Overview
Avoid the privacy issues of sending prompts to cloud-based large language models (LLMs such as ChatGPT) by installing and managing a local instance of Ollama, an open-source tool for downloading and running LLMs locally behind an API. We'll set up a chatbot and practice querying the LLM directly for generative outputs, text clustering, and classification.
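As a taste of what the workshop covers, here is a minimal sketch of querying a locally running Ollama server through its REST API. The endpoint and payload shape follow Ollama's `/api/generate` API; the model name `llama3` is an assumption, not necessarily the model used in the workshop.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to a local Ollama server and return its response text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running locally, the prompt never leaves your machine:
# print(ask_ollama("Explain retrieval-augmented generation in one sentence."))
```

Because the server runs on `localhost`, nothing is sent to a third party.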
Learning Goals
After completing this workshop, learners should be able to:
Describe control, customization, and data privacy issues of cloud-based AI systems
Install Ollama in their local environment
Deploy a chatbot
Deploy a Retrieval-Augmented Generation (RAG) application
Conduct text clustering and classification on a corpus using LLMs
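As a preview of the classification goal above, a zero-shot approach simply asks the model to choose one label from a fixed set. The sketch below only builds the request payload for Ollama's `/api/chat` endpoint; the label set, prompt wording, and model name `llama3` are illustrative assumptions, not the workshop's exact code.

```python
import json

LABELS = ["positive", "negative", "neutral"]  # hypothetical label set

def build_classification_request(text: str, labels: list) -> dict:
    """Build a chat payload asking the model to answer with exactly one label."""
    prompt = (
        "Classify the following text with exactly one of these labels: "
        + ", ".join(labels)
        + ".\nRespond with the label only.\n\nText: "
        + text
    )
    return {
        "model": "llama3",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_classification_request("The workshop was great!", LABELS)
print(json.dumps(payload, indent=2))
```

Posting this payload to `http://localhost:11434/api/chat` and repeating it over a corpus is the core of LLM-based classification; clustering follows the same pattern with a different prompt.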
Prerequisites
This workshop is not an introduction to programming; learners should have basic fluency in Python if they would like to code along.
Computing Requirements
To code along during the workshop, users must have:
A local computer on which they have administrative rights to install software
A working Python environment, such as Jupyter, VS Code, or the command line.
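If you want to install Ollama ahead of time, one common route on Linux is Ollama's published install script (macOS and Windows users can instead download the installer from ollama.com). The model name `llama3` below is an example; the workshop may use a different one.

```shell
# Download and run Ollama's official install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Pull an example model, then start an interactive chat with it
ollama pull llama3
ollama run llama3
```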