Bachelor of Science in Computer Science

Building Large Language Model Applications


Instructor: Hamza Farooq
[Adjunct Professor @ Stanford & UCLA | Ex-Research Manager @ Google | CEO, Traversaal.ai]


Co-Instructors

  • Kanwal Mehreen [Lead NLP Researcher @ Traversaal.ai | Technical Editor & Writer @ KDNuggets]
  • Saima Hassan [Assistant Professor @ KUST | AI/ML Researcher]

Our Building Large Language Model Applications course is thoughtfully designed to provide you with foundational and advanced skills in Generative AI, LLM architecture, prompt engineering, fine-tuning, and deployment. The focus will be on translating theoretical concepts into real-world applications, from creating effective prompts to deploying models on scalable platforms. Whether it’s crafting chatbots, optimizing semantic search, or training local LLMs, this course equips you with the tools to master the end-to-end lifecycle of Large Language Models.


Course Educational Objectives

  • To jump-start a career in Generative AI and LLM applications by mastering LLM architectures, fine-tuning techniques, and real-world deployment strategies.
  • To inspire innovative AI-driven solutions and entrepreneurial ideas in Generative AI.
  • To prepare skilled professionals who will contribute to Pakistan’s growth in AI and NLP technologies.


Course Duration

One semester (4 months)

Prerequisites

To ensure students can effectively grasp and implement course concepts, the following knowledge areas are required:

  • Basic Python Programming
  • Machine Learning Fundamentals and Deep Learning Concepts

Grading

There are three categories of registration:

  1. Credit Hours and Certificate with Grade (Class participation, submission of all assignments, Capstone Project)
  2. Certification of Participation (Class participation and submission of all assignments)
  3. Certification of Attendance (Attendance only – submission of assignments optional)

Book




Build LLM Applications (from Scratch) by Hamza Farooq

  • Module-1

     Foundations of NLP and LLMs

        • Introduction to NLP: Key concepts, challenges, and applications.
        • Text preprocessing techniques: Tokenization, stemming, lemmatization, and feature extraction.
        • Early NLP models: Naive Bayes, SVMs, RNNs, LSTMs, and GRUs.
        • Transition to Large Language Models: Evolution from traditional NLP to LLMs.
        • Overview of LLM applications and their significance in AI.
  • Module-2

    LLM Architectures and Training Fundamentals

        • Transformer models and self-attention mechanisms.
        • Architectures of popular LLMs: GPT, BERT, LLaMA, and their variations.
        • Fundamentals of LLM training: Pre-training, fine-tuning, and reinforcement learning.
        • Challenges in training: Addressing bias, hallucination, and evaluation metrics.
  • Module-3

    Prompt Engineering and Fine-Tuning

        • Basics of prompt engineering: Zero-shot, few-shot, and chain-of-thought prompting.
        • Advanced techniques: Prompt tuning and in-context learning.
        • Parameter-efficient fine-tuning strategies: LoRA and prefix-tuning.
        • Practical applications of prompt design for improved LLM performance.
  • Module-4

    Semantic Search and Embedding Models

        • Introduction to semantic search and word embeddings (Word2Vec, GloVe).
        • Building search engines using vector databases (a minimal semantic-search sketch follows this module list).
        • Techniques for analyzing text similarity and information retrieval.
  • Module-5

    Advanced LLM Applications

        • Text generation methods: Beam search, nucleus sampling, and output control techniques.
        • Multimodal models: Integration of vision and language with Vision Transformers (ViT).
        • Retrieval-Augmented Generation (RAG): Combining LLMs with external knowledge sources.
        • Deployment strategies: Local LLM deployment, quantization, and pruning.
  • Module-6

    Ethics, Safety, and Future Trends

        • Addressing ethical concerns: Bias, misinformation, and responsible AI development.
        • LLM safety and alignment techniques.
        • Future trends in AI: Multimodal models, LLM agents, and advancements in reasoning.
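To make the Module 4 and Module 5 topics above more concrete, here is a minimal sketch of semantic search feeding a retrieval-augmented prompt. It is illustrative only, not course material: it assumes the open-source sentence-transformers library with the all-MiniLM-L6-v2 embedding model, and the sample documents and query are invented for the example.

```python
# Minimal illustrative sketch (not official course code): semantic search over a
# tiny in-memory document set, followed by assembling a retrieval-augmented prompt.
# Assumes the sentence-transformers package and the "all-MiniLM-L6-v2" model;
# any embedding model could be substituted.

import numpy as np
from sentence_transformers import SentenceTransformer

# Toy corpus, invented for demonstration.
documents = [
    "LoRA fine-tunes an LLM by training small low-rank adapter matrices.",
    "Nucleus sampling draws the next token from the smallest set whose probability mass exceeds p.",
    "A vector database stores embeddings and returns nearest neighbours for a query vector.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
doc_vectors = model.encode(documents, normalize_embeddings=True)

query = "How does parameter-efficient fine-tuning work?"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# With normalised vectors, cosine similarity reduces to a dot product.
scores = doc_vectors @ query_vector
retrieved = documents[int(np.argmax(scores))]

# Retrieval-Augmented Generation: the retrieved passage becomes the context in the
# prompt that would then be sent to an LLM of your choice.
prompt = (
    "Answer the question using only the context provided.\n\n"
    f"Context: {retrieved}\n\n"
    f"Question: {query}\n"
    "Answer:"
)
print(prompt)
```

In practice the top few passages, rather than a single document, would be retrieved, and the in-memory array would be replaced by a vector database; the overall pipeline stays the same.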