Created: 2023-10-16 13:39
Models
How to run
Some of the resources I used to run the model and to fix the issues I ran into along the way:
- Running a Hugging Face Large Language Model (LLM) locally on my laptop
- Transformers v4.x: Convert slow tokenizer to fast tokenizer
- Running Falcon Inference on a CPU with Hugging Face Pipelines
- Question Answering over text files with Falcon 7B and LangChain
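Putting the first three resources together, a minimal sketch of loading Falcon for CPU inference through a Hugging Face `pipeline` might look like the following. The model name `tiiuae/falcon-7b-instruct` and the generation parameters are illustrative assumptions, not taken from the notes above; the heavy `transformers` import is deferred so the helper can be defined without the dependency loaded.

```python
def build_falcon_pipeline(model_name: str = "tiiuae/falcon-7b-instruct"):
    """Build a text-generation pipeline that runs on CPU.

    First call downloads ~14 GB of model weights from the Hugging Face Hub.
    """
    # Imported lazily: transformers is a heavy dependency, and nothing
    # else in this helper needs it until the pipeline is actually built.
    from transformers import AutoTokenizer, pipeline

    # AutoTokenizer returns the fast (Rust-backed) tokenizer when one is
    # available, which is what the slow-to-fast conversion note is about.
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    return pipeline(
        "text-generation",
        model=model_name,
        tokenizer=tokenizer,
        device=-1,            # -1 selects CPU rather than a GPU
        max_new_tokens=200,   # illustrative cap on generated length
    )


# Usage (commented out because it triggers the full model download):
# generator = build_falcon_pipeline()
# print(generator("What is a tokenizer?")[0]["generated_text"])
```

For the question-answering use case in the last link, this pipeline would typically be wrapped in a LangChain `HuggingFacePipeline` LLM and combined with a retriever over the text files, but the exact wiring depends on the LangChain version.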
Code is here