As a DevOps engineer, can you no longer live without ChatGPT, yet is its use prohibited at work because of company policy or sensitive data? Are you curious how you can host your own Large Language Model (LLM)?
During the one-day workshop Generative AI for DevOps Engineers at AT Computing, you will learn how to set up and use your own local version of “ChatGPT”, built entirely on open-source technology, without sending your data to the cloud!
The day starts with an introduction to Generative AI: which LLMs are available and how to use them. We then dive into the hardware side and show you how to apply GPU acceleration in virtual machines and containers.
Through various practical exercises, you will set up your own Large Language Model server and even create your own “model”. You will learn how to connect a web-based client to your Large Language Model, essentially creating your own ChatGPT clone.
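As a taste of what “creating your own model” can involve: in an Ollama-style setup (an assumption for illustration; the workshop’s actual tooling may differ), a custom model is little more than a Modelfile that layers a system prompt and parameters onto an existing base model:

```
# Hypothetical Modelfile: a DevOps-assistant variant of a base model.
# Base model name is an example; pick any model you have pulled locally.
FROM llama3

# Lower temperature for more deterministic, factual answers.
PARAMETER temperature 0.2

# System prompt that shapes the model's behaviour.
SYSTEM """You are a helpful assistant for DevOps engineers. Answer concisely."""
```

You would then build and run it locally with `ollama create devops-assistant -f Modelfile` followed by `ollama run devops-assistant`.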
In addition, you will discover how to talk to the LLM API from Python, how to apply Retrieval Augmented Generation (RAG) so the LLM can use your own documents, and how to analyze images with your own LLM. We will also cover log analysis using your LLM.
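To give an impression of the kind of exercise involved, here is a minimal, illustrative sketch of Retrieval Augmented Generation against a local LLM API. It assumes an Ollama-style endpoint at `localhost:11434`; the endpoint, model name, helper functions, and example documents are hypothetical placeholders, not the workshop material. Real RAG setups retrieve with vector embeddings rather than keyword overlap.

```python
import json

# Assumed local endpoint and model name (Ollama-style); adjust to your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"

def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retrieval; production setups use embeddings."""
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_request(prompt, context_docs):
    """Assemble a RAG request: retrieved snippets are prepended to the prompt."""
    context = "\n".join(context_docs)
    full_prompt = (
        f"Answer using only this context:\n{context}\n\nQuestion: {prompt}"
    )
    return {"model": MODEL, "prompt": full_prompt, "stream": False}

# Hypothetical "own documents" to ground the answer in.
docs = [
    "Our backup job runs nightly at 02:00 via cron.",
    "The web tier autoscales between 2 and 10 pods.",
    "Deployments are rolled out with Ansible playbooks.",
]
question = "When does the backup job run?"
payload = build_request(question, retrieve(question, docs))
print(json.dumps(payload, indent=2))

# To actually query a running local server, you would POST this payload,
# e.g. requests.post(OLLAMA_URL, json=payload).json()["response"]
```

The same request pattern carries over to the other exercises: log analysis simply feeds log excerpts in as context instead of documents.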