LLM Design Patterns
Main features
Author: Ken Huang
Title: LLM Design Patterns
Publisher: Packt Publishing
ISBN: 9781836207023
ISBN (print edition): 9781836207030
Edition: 1
Price: €39.56
Content details
Language: English
Imprint: Packt Publishing
Technical details
Format: E-book
 

Description:

Explore reusable design patterns, including data-centric approaches, model development, model fine-tuning, and RAG for LLM application development and advanced prompting techniques

Key Features
- Learn comprehensive LLM development, including data prep, training pipelines, and optimization
- Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents
- Implement evaluation metrics, interpretability, and bias detection for fair, reliable models
- Print or Kindle purchase includes a free PDF eBook

Book Description
This practical guide for AI professionals enables you to build on the power of design patterns to develop robust, scalable, and efficient large language models (LLMs). Written by a global AI expert and popular author driving standards and innovation in Generative AI, security, and strategy, this book covers the end-to-end lifecycle of LLM development and introduces reusable architectural and engineering solutions to common challenges in data handling, model training, evaluation, and deployment.

You'll learn to clean, augment, and annotate large-scale datasets, architect modular training pipelines, and optimize models using hyperparameter tuning, pruning, and quantization. The chapters help you explore regularization, checkpointing, fine-tuning, and advanced prompting methods, such as reason-and-act, as well as implement reflection, multi-step reasoning, and tool use for intelligent task completion.

The book also highlights Retrieval-Augmented Generation (RAG), graph-based retrieval, interpretability, fairness, and RLHF, culminating in the creation of agentic LLM systems.

By the end of this book, you'll be equipped with the knowledge and tools to build next-generation LLMs that are adaptable, efficient, safe, and aligned with human values.

What you will learn
- Implement efficient data prep techniques, including cleaning and augmentation
- Design scalable training pipelines with tuning, regularization, and checkpointing
- Optimize LLMs via pruning, quantization, and fine-tuning
- Evaluate models with metrics, cross-validation, and interpretability
- Understand fairness and detect bias in outputs
- Develop RLHF strategies to build secure, agentic AI systems

Who this book is for
This book is essential for AI engineers, architects, data scientists, and software engineers responsible for developing and deploying AI systems powered by large language models. A basic understanding of machine learning concepts and experience in Python programming is a must.