CS 6364/4364 Lab 8

Announcements

  • Lab 8 is due by Thursday, 10/31 at 11:59 PM (ET)

Goals

Goals for this Lab Assignment:

  • Learn and Implement the Attention Transformer Deep Learning Algorithm

  • Integrate Transformer Models into Final Project (If Applicable)

1. Learn and Implement

  • Understand the core principles of the attention mechanism and transformer architecture used in state-of-the-art deep learning models.

  • Gain hands-on experience implementing the algorithm in code (a minimal attention sketch follows this list).

  • Document Your Understanding of the Transformer Model in your ML_Algorithms.PDF

    • Write a Reflection: Summarize your current understanding of the transformer model. Focus on key concepts such as the attention mechanism, encoder-decoder architecture, and how transformers handle sequence data differently from traditional models like RNNs or LSTMs. Explain these concepts in your own words to demonstrate your grasp of the topic.

    • Identify Learning Gaps: As you work through the material, note any questions or challenges you encounter while learning about transformers. These could be areas where you feel uncertain, concepts that seem complex, or specific technical details related to implementation. Be as specific as possible to help guide your future study or discussions with peers or instructors.

    • Encourage Critical Thinking: Reflect on how the transformer model might apply to different types of problems, such as natural language processing or computer vision. Consider what makes transformers particularly effective and why they might be challenging to implement. Writing down these thoughts can help deepen your understanding.
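For reference, here is a minimal sketch of scaled dot-product attention, the core operation the reflection above asks you to explain. It assumes only NumPy; the function name and toy dimensions are illustrative, not part of the assignment.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token similarities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                              # weighted sum of value vectors

    # Toy self-attention: 4 tokens with 8-dimensional embeddings, Q = K = V = x.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)

Note how each output row mixes information from all four tokens at once; this parallel, content-based mixing is the key difference from RNNs and LSTMs, which process tokens one step at a time.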

2. Integrate it into Final Project (If Applicable)

  • For Experiment Papers: Explore potential use cases for transformer models in your project. Identify how the attention mechanism can improve your model’s performance or efficiency. Implement the transformer algorithm as part of your experiment and analyze the outcomes (see the sketch after this list).

  • For Review Papers: Investigate recent advancements in transformer algorithms, focusing on their application in the past 3–5 years. Summarize key findings on how transformer-based models have impacted different fields, providing critical analysis of their strengths, limitations, and emerging trends.
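If you do adopt a transformer in an experiment paper, one common pattern is to wrap a stock encoder in a small task head. The sketch below uses PyTorch's built-in nn.TransformerEncoder; all names and hyperparameters (TinyTransformerClassifier, d_model=128, etc.) are hypothetical placeholders to adapt to your own data and task.

    import torch
    import torch.nn as nn

    # Hypothetical settings; replace with values suited to your dataset.
    vocab_size, d_model, num_classes, seq_len = 10_000, 128, 2, 32

    class TinyTransformerClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))  # learned positions
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, num_classes)

        def forward(self, tokens):                 # tokens: (batch, seq_len) int IDs
            h = self.embed(tokens) + self.pos      # inject positional information
            h = self.encoder(h)                    # self-attention over the sequence
            return self.head(h.mean(dim=1))        # mean-pool, then classify

    model = TinyTransformerClassifier()
    logits = model(torch.randint(0, vocab_size, (8, seq_len)))
    print(logits.shape)                            # torch.Size([8, 2])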

3. Submit your code

  • For Experiment Papers: Submit your current code implementation. In your submission, highlight two to three baseline machine learning algorithms that you have compared with your approach. Include a clear comparison of performance metrics (e.g., accuracy, F1 score) and a brief explanation of why your transformer-based approach outperforms or complements the baselines (a minimal metrics sketch follows this list).

  • For Review Papers: Summarize at least three GitHub repositories that you found most relevant and helpful to your research questions. For each repository, include a brief description of its purpose, the key insights gained, and how they relate to your review topic. Add these repositories as references to your own GitHub repository, demonstrating how they connect to your research.

  • Each team should submit one zipped code folder containing the code or summaries, with a total size of 5 MB or less.

  • Ensure the folder is well-organized with clear documentation (e.g., a README file explaining the structure and content).
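A minimal sketch of the requested baseline comparison, assuming scikit-learn is available. The synthetic dataset and the two baselines are placeholders; substitute your real data and report your transformer's predictions alongside them.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, f1_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for your dataset; replace with your own features/labels.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    baselines = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(random_state=0),
    }
    for name, clf in baselines.items():
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
              f"f1={f1_score(y_te, pred):.3f}")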

4. Update Your HCII Paper

  • Abstract Revision: For this lab, make minimal changes to your 800-word abstract to reflect your most recent findings from your programming work. Focus on summarizing key results obtained through experimentation or implementation during this lab. Ensure these findings are succinctly integrated into the abstract, highlighting their impact on your overall research.

  • Results Summary: Summarize the key results of your programming experiments, focusing on how these results support or challenge the original research questions or hypotheses in your abstract. Briefly mention any improvements or insights gained from using the transformer model or other techniques implemented during this lab.

  • Begin Expanding to Full-Length Paper: Start extending the abstract into a 12-page full-length paper, which is due as part of your final project in six weeks.

5. Submission Guide

  • Each team should submit a single file on behalf of the entire group. Ensure all team members are CC'd on the submission.

  • Name the file as follows: lab_8_lastname1_lastname2.zip

  • Your submission should include:

    1. Your_midterm_submission_order_Lab_8.PDF: the final paper draft in double-blind format, named with your midterm submission order (e.g., 5th_lab_8.pdf).

    2. ML_Algorithms.PDF: your write-up documenting your understanding of the transformer model (see Section 1).

    3. A well-organized folder containing only your code files. Ensure the code is properly documented with clear comments explaining key functions and logic, and include a README file that provides an overview of the code, instructions for running it, and any dependencies or requirements (one possible layout is sketched after this list).

    4. Exclude Data Files: Do not include any data files in your submission. If your code relies on specific datasets, provide a link or instructions on how to access the data separately in your README file. Ensure your code is flexible enough to run with external data inputs.
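One possible folder layout satisfying the requirements above (file names are illustrative; organize however best fits your project):

    lab_8_lastname1_lastname2/
    ├── README.md           # overview, run instructions, dependencies, data-access links
    ├── requirements.txt    # pinned package versions
    ├── src/
    │   ├── model.py        # transformer implementation
    │   └── train.py        # training / evaluation entry point
    └── results/            # metrics and comparison tables (no raw data files)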

6. Notes

  • Email the zip file for Lab 8 to x.qu@gwu.edu.

  • Lab assignments are typically released on Fridays and are due the following Thursday.

  • Lab 8 is due by Thursday, 10/31 at 11:59 PM (ET)