How To Install Code Llama Locally: Easy Windows Guide

Install Code Llama Locally

Meta AI continues to impress with its latest release, Code Llama! This guide will teach you how to install Code Llama locally on your Windows machine.


Meta AI has introduced Code Llama, an extension of Llama 2 tailored for coding needs. We have discussed Llama 2 in depth within our guide on How to Install Llama 2 Locally.

The Code Llama model bridges the gap between GPT-3.5 and Llama 2, offering capabilities such as code generation, debugging, and natural-language reasoning about code. It is open source and a serious competitor to OpenAI’s Code Interpreter.

Performance Metrics

On the HumanEval benchmark, the Code Llama – Python model outperforms GPT-3.5, scoring 53.7 compared to GPT-3.5’s 48.1.

Furthermore, on the MBPP (Mostly Basic Python Problems) benchmark, Code Llama again comes out ahead, scoring 56.2 against GPT-3.5’s 52.2.

Model Variants

Code Llama has been fine-tuned from Llama 2’s base models and is available in three distinct flavors: Vanilla, Instruct, and Python. These models come in sizes of 7 billion, 13 billion, and 34 billion parameters. The smallest models can be run locally on desktops with decent GPUs.

How To Install Code Llama Locally

For those looking to install Code Llama locally, the process is streamlined using Text Generation Web UI, also known as Oobabooga.

We will use a simple installer called Pinokio to simplify the process. However, if you prefer to download and install TextGen WebUI (Oobabooga) manually, follow the instructions in our other Llama 2 guide.

Step 1: Install TextGen WebUI (Oobabooga):

TextGen WebUI, also known as “Oobabooga”, allows you to deploy any large language model (LLM) and many other models. It offers a fast installation method with a one-click installer.

Go to the Pinokio website and download the Pinokio for Windows version.

Pinokio for Windows Download

Once downloaded, extract the zip file and run the setup. Windows Defender may warn you that the app is unrecognized; select ‘Run anyway’.

Windows Defender

Follow the on-screen instructions to complete the installation.

Pinokio Application for Code Llama 2

After installation, launch the Pinokio application. Visit the ‘Discover’ page and scroll down to find ‘Text Generation Web UI’.

Discover Page Text Generation WebUI

Click on ‘Install’ next to Text Generation Web UI, then click ‘Install’ again to confirm. The application will handle all the requirements and installation steps for you.

Install Text Generation WebUI

During installation, you might be prompted to specify your GPU type (NVIDIA, AMD, Apple, or None).

Choose the appropriate option based on your system. In my case, I typed ‘A’ for NVIDIA and clicked Done.

Nvidia Option Selected With A

Once the installation is successful, click ‘Start Chat Mode’.

TextGen Start Chat Mode Local Host Code Llama 2

Next, select ‘Open WebUI’, or go to localhost in your web browser, and you should now see TextGen WebUI (“Oobabooga”).

TextGen WebUI Oobabooga

Step 2: Install Code Llama

You can request the model directly from Meta AI’s website, but there is a waitlist that can take some time. Instead, we can get it from Hugging Face, where the user “TheBloke” has kindly uploaded Code Llama, Llama 2, and many more!

In this guide, we will use CodeLlama-7B-Instruct-GPTQ. Depending on your GPU capabilities, choose the appropriate model size. If you don’t have a strong GPU, stick with the 7-billion-parameter model.
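As a rough guide to the sizing decision above, the helper below maps available GPU VRAM to a 4-bit GPTQ variant. The thresholds are approximate assumptions based on typical quantized memory footprints, not official requirements, and the repo names follow TheBloke’s naming convention:

```python
# Rough rule of thumb for picking a GPTQ (4-bit) Code Llama size by GPU VRAM.
# Thresholds are approximate assumptions, not official hardware requirements.
def pick_model_size(vram_gb: float) -> str:
    """Suggest a Code Llama GPTQ variant for a given amount of VRAM (in GB)."""
    if vram_gb >= 24:
        return "CodeLlama-34B-Instruct-GPTQ"
    if vram_gb >= 12:
        return "CodeLlama-13B-Instruct-GPTQ"
    if vram_gb >= 6:
        return "CodeLlama-7B-Instruct-GPTQ"
    return "none (consider a CPU-friendly build instead)"

print(pick_model_size(8))  # a typical mid-range card suggests the 7B model
```

If you are unsure how much VRAM you have, check Task Manager’s Performance tab under your GPU.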

In my personal experience, I was not very impressed with Code Llama’s 7B model, and I opted for Code Llama 34B.

From Hugging Face, copy the title of the model card you want to use. See the image below.

TheBloke Code Llama's 7B model

Next, in the Text Generation Web UI, go to the “Model” tab, paste the model card title, and click “Download”.

Downloading Code Llama's 7B model

After downloading, click the “Refresh” button, select the model from the dropdown menu, and click “Load”.

Code Llama's 7B model

Once loaded, you can immediately start interacting with the Code Llama model. Just switch back to the ‘Chat’ tab.

successfully Installed Code Llama locally

Congratulations, you have successfully installed Code Llama locally!
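Beyond the chat tab, you can also query the loaded model programmatically. The sketch below assumes Text Generation Web UI was started with its API enabled and exposes an OpenAI-compatible endpoint on localhost port 5000; the port and endpoint path may differ in your setup, so treat both as assumptions to verify:

```python
import json
import urllib.request

# Assumed endpoint: Text Generation Web UI's OpenAI-compatible API,
# available only when the WebUI is launched with its API enabled.
API_URL = "http://localhost:5000/v1/chat/completions"

def build_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat payload for the local endpoint."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt: str) -> str:
    """Send a prompt to the local model and return its reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires the WebUI running with the API enabled):
# print(ask("Write a Python function that reverses a string."))
```

This is handy for scripting repetitive code-generation tasks instead of copy-pasting through the chat window.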

Code Llama Examples

Here are some examples of Code Llama from Meta’s latest blog post.

Gif of Code Llama showing code in Bash
Gif of Code Llama plotting csv data

Dataset Insights

Code Llama’s training data is impressive. The model was trained on 500 billion tokens during its initial phase, predominantly drawn from a near-deduplicated dataset of publicly available code.

A further 8% of the dataset is drawn from natural-language data related to code, which helps the model discuss and explain code, not just generate it.

Generative AI Responses

Code Llama excels in generating detailed responses to code input prompts. For instance, when prompted with a bash command query, Code Llama provides a clear, thorough, and accurate command in response.
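If you prompt an Instruct variant directly (outside the chat UI, which formats prompts for you), it expects the Llama 2 instruction format. A minimal sketch of that wrapping, where the example prompt is purely illustrative:

```python
# Code Llama *Instruct* models expect user messages wrapped in the
# Llama 2 [INST] ... [/INST] instruction tags.
def format_instruct_prompt(user_message: str) -> str:
    """Wrap a user message in the [INST] tags the Instruct models expect."""
    return f"[INST] {user_message.strip()} [/INST]"

prompt = format_instruct_prompt("Write a Bash command that lists all .txt files recursively.")
print(prompt)  # [INST] Write a Bash command that lists all .txt files recursively. [/INST]
```

The chat tab in TextGen WebUI applies this formatting automatically, so you only need it when calling the model from your own scripts.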

This level of detail and clarity is unparalleled, making Code Llama an invaluable tool for developers.

Dive Deeper with the Code Llama Research Paper

Meta AI has published a comprehensive research paper for those keen to delve into the technical intricacies and groundbreaking methodologies behind Code Llama.

This document delves into the model’s architecture, training data, benchmark performances, and safety evaluations.

The paper provides insights into the cutting-edge advancements that set Code Llama apart in AI-powered coding assistance. If you have the time, we recommend reading the research paper for a complete perspective on this transformative technology.


Code Llama represents a significant leap in AI-powered coding assistance. Its impressive benchmark performances and innovative training methodologies make it a standout choice for developers.

Whether you’re a seasoned developer or just starting, Code Llama promises to be a game-changer in your coding journey.

Please leave a comment if you run into any issues, and I will try and help.
