LLM for Unity
v2.2.5
Create characters in Unity with LLMs!
LLM for Unity enables seamless integration of Large Language Models (LLMs) within the Unity engine.
It allows you to create intelligent characters that your players can interact with for an immersive experience.
LLM for Unity is built on top of the awesome llama.cpp and llamafile libraries.
🧪 Tested on Unity: 2021 LTS, 2022 LTS, 2023
Method 1: Install using the asset store
- Open the LLM for Unity asset page and click Add to My Assets
- Open the Package Manager in Unity: Window > Package Manager
- Select the Packages: My Assets option from the drop-down
- Select the LLM for Unity package, click Download and then Import
Method 2: Install using the GitHub repo
- Open the Package Manager in Unity: Window > Package Manager
- Click the + button and select Add package from git URL
- Use the repo URL https://github.com/undreamai/LLMUnity.git and click Add
First you will set up the LLM for your game:
- Create an empty GameObject. In the GameObject Inspector click Add Component and select the LLM script.
- Download one of the default models with the Download Model button (~GBs).
- Or load your own model in .gguf format with the Load model button (see LLM model management).

Then you can set up each of your characters as follows:
- Create an empty GameObject for the character. In the GameObject Inspector click Add Component and select the LLMCharacter script.
- Define the role of your AI in the Prompt. You can define the name of the AI (AI Name) and the player (Player Name).
- Select the LLM constructed above in the LLM field if you have more than one LLM GameObject.

You can also adjust the LLM and character settings according to your preference (see Options).
In your script you can then use it as follows:
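A minimal sketch (assuming the LLMUnity namespace and a Chat(message, callback) method on LLMCharacter, as referenced in this guide):

```csharp
using UnityEngine;
using LLMUnity;

public class MyScript : MonoBehaviour
{
    public LLMCharacter llmCharacter;

    void HandleReply(string reply)
    {
        // do something with the reply from the model
        Debug.Log(reply);
    }

    void Game()
    {
        // your game function
        string message = "Hello bot!";
        _ = llmCharacter.Chat(message, HandleReply);
    }
}
```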
You can also specify a function to call when the model reply has been completed. This is useful if the Stream option is enabled for continuous output from the model (default behaviour):
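A sketch of the completion callback, continuing the example above:

```csharp
void ReplyCompleted()
{
    // do something when the reply from the model is complete
    Debug.Log("The AI replied");
}

void Game()
{
    // your game function
    string message = "Hello bot!";
    _ = llmCharacter.Chat(message, HandleReply, ReplyCompleted);
}
```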
To stop the chat without waiting for its completion you can use:
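For instance (assuming LLMCharacter provides a CancelRequests method):

```csharp
llmCharacter.CancelRequests();
```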
That's all ✨!
You can also:
To build an Android app you need to specify the IL2CPP scripting backend and ARM64 as the target architecture in the player settings. These settings can be accessed from the Edit > Project Settings menu within the Player > Other Settings section.
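If you prefer to apply these settings from an editor script, a sketch using Unity's PlayerSettings API (the menu path is illustrative; place the file in an Editor folder):

```csharp
using UnityEditor;

public static class AndroidBuildSetup
{
    [MenuItem("Tools/Configure Android Build")]
    public static void Configure()
    {
        // IL2CPP scripting backend and ARM64 as the target architecture
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.Android, ScriptingImplementation.IL2CPP);
        PlayerSettings.Android.targetArchitectures = AndroidArchitecture.ARM64;
    }
}
```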
It is also a good idea to enable the Download on Build option in the LLM GameObject to download the model on launch, in order to keep the app size small.
To automatically save / load your chat history, you can set the Save parameter of the LLMCharacter to the filename (or relative path) of your choice. The file is saved in the persistentDataPath folder of Unity. This also saves the state of the LLM, which means that the previously cached prompt does not need to be recomputed.
You can also manually save and load your chat history with a filename or relative path of your choice:
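A sketch, assuming LLMCharacter exposes Save and Load methods that mirror the Save parameter above:

```csharp
// save the chat history (and cached LLM state, if enabled) under persistentDataPath
llmCharacter.Save("filename");

// restore a previously saved chat history
llmCharacter.Load("filename");
```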
The last argument of the Chat function is a boolean that specifies whether to add the message to the history (default: true):
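For example, to chat without recording the exchange (a sketch under the same assumptions as above):

```csharp
void Game()
{
    // your game function
    string message = "Hello bot!";
    // last argument false: do not add the message to the chat history
    _ = llmCharacter.Chat(message, HandleReply, ReplyCompleted, false);
}
```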
To wait for the reply before continuing, you can use the async/await functionality:
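A sketch awaiting the reply (same assumed Chat signature):

```csharp
async void Game()
{
    // your game function
    string message = "Hello bot!";
    string reply = await llmCharacter.Chat(message, HandleReply, ReplyCompleted);
    Debug.Log(reply);
}
```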
You can use a remote server to carry out the processing and implement characters that interact with it.
Create the server
To create the server:
- Create a project with a GameObject using the LLM script as described above
- Enable the Remote option of the LLM and optionally configure the server parameters: port, API key, SSL certificate, SSL key

Alternatively you can use a server binary for easier deployment:
- Use the binary matching your system, e.g. windows-cuda-cu12.2.0.

Create the characters
Create a second project with the game characters using the LLMCharacter script as described above. Enable the Remote option and configure the host with the IP address (starting with "http://") and port of the server.
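If you prefer to configure this from code, a sketch (assuming LLMCharacter exposes remote, host and port fields that mirror the inspector options; the values are hypothetical):

```csharp
llmCharacter.remote = true;               // use a remote server instead of a local LLM
llmCharacter.host = "http://192.168.0.5"; // IP address of the server
llmCharacter.port = 13333;                // port of the server
```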
The Embeddings function can be used to obtain the embeddings of a phrase:
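A sketch, assuming the call is asynchronous and returns the embedding vector:

```csharp
// requires System.Collections.Generic for List<float>
List<float> embeddings = await llmCharacter.Embeddings("hi, how are you?");
```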
Detailed function-level documentation can be found here:
The Samples~ folder contains several examples of interaction 🤖:
To install a sample:
- Open the Package Manager: Window > Package Manager
- Select the LLM for Unity Package. From the Samples Tab, click Import next to the sample you want to install.

The samples can be run with the Scene.unity scene they contain inside their folder.
In the scene, select the LLM GameObject and click the Download Model button to download a default model, or Load model to load your own model (see LLM model management).
Save the scene, run and enjoy!
LLM for Unity implements a model manager that allows you to load or download LLMs and ship them directly in your game.
The model manager can be found as part of the LLM GameObject:
You can download models with the Download model button. LLM for Unity includes different state-of-the-art models built in for different model sizes, quantised with the Q4_K_M method. Alternative models can be downloaded from HuggingFace in the .gguf format. You can download a model locally and load it with the Load model button, or copy the URL in the Download model > Custom URL field to directly download it.
If a HuggingFace model does not provide a gguf file, it can be converted to gguf with this online converter.
The chat template used for constructing the prompts is determined automatically from the model (if a relevant entry exists) or the model name.
If incorrectly identified, you can select another template from the chat template drop-down.
Models added in the model manager are copied to the game during the building process.
You can exclude a model from the build by deselecting the "Build" checkbox.
To remove the model (but not delete it from disk) you can click the bin button.
The path and URL (if downloaded) of each added model are displayed in the expanded view of the model manager, accessed with the >> button:
You can create lighter builds by selecting the Download on Build option. Using this option the models will be downloaded the first time the game starts instead of being copied into the build.
If you have loaded a model locally, you need to set its URL through the expanded view, otherwise it will be copied into the build.
❗ Before using any model make sure you check their license ❗
LLM Settings
- Show/Hide Advanced Options: toggle to show/hide advanced options from below
- Log Level: select how verbose the log messages are
- Use extras: select to install and allow the use of extra features (flash attention and IQ quants)
- Remote: select to provide remote access to the LLM
- Port: port to run the LLM server (if Remote is set)
- Num Threads: number of threads to use (default: -1 = all)
- Num GPU Layers: number of model layers to offload to the GPU. If set to 0 the GPU is not used. Use a large number, i.e. >30, to utilise the GPU as much as possible. Note that higher values of context size will use more VRAM. If the user's GPU is not supported, the LLM will fall back to the CPU
- Debug: select to log the output of the model in the Unity Editor
- Parallel Prompts: number of prompts / slots that can happen in parallel (default: -1 = number of LLMCharacter objects). Note that the context size is divided among the slots. If you want to retain as much context as possible for the LLM and don't need all the characters present at the same time, you can set this number and specify the slot for each LLMCharacter object. E.g. setting Parallel Prompts to 1 and slot 0 for all LLMCharacter objects will use the full context, but the entire prompt will need to be computed (no caching) whenever an LLMCharacter object is used for chat.
- Dont Destroy On Load: select to not destroy the LLM GameObject when loading a new Scene
- API key: API key to use to allow access to requests from LLMCharacter objects (if Remote is set)
- Load SSL certificate: allows you to load an SSL certificate for end-to-end encryption of requests (if Remote is set). Requires the SSL key as well.
- Load SSL key: allows you to load an SSL key for end-to-end encryption of requests (if Remote is set). Requires the SSL certificate as well.
- SSL certificate path: the SSL certificate used for end-to-end encryption of requests (if Remote is set)
- SSL key path: the SSL key used for end-to-end encryption of requests (if Remote is set)
- Download model: click to download one of the default models
- Load model: click to load your own model in .gguf format
- Download on Start: enable to download the LLM models the first time the game starts. Alternatively the LLM models will be copied directly into the build
- Context Size: size of the prompt context (0 = context size of the model). This is the number of tokens the model can take as input when generating responses. Higher values use more RAM or VRAM (if using GPU).
- Download lora: click to download a LoRA model in .gguf format
- Load lora: click to load a LoRA model in .gguf format
- Batch Size: batch size for prompt processing (default: 512)
- Model: the path of the model being used (relative to the Assets/StreamingAssets folder)
- Chat Template: the chat template being used for the LLM
- Lora: the path of the LoRAs being used (relative to the Assets/StreamingAssets folder)
- Lora Weights: the weights of the LoRAs being used
- Flash Attention: click to use flash attention in the model (if Use extras is enabled)
- Base Prompt: a common base prompt to use across all LLMCharacter objects using the LLM

LLMCharacter Settings
- Show/Hide Advanced Options: toggle to show/hide advanced options from below
- Log Level: select how verbose the log messages are
- Use extras: select to install and allow the use of extra features (flash attention and IQ quants)
- Remote: whether the LLM used is remote or local
- LLM: the LLM GameObject (if Remote is not set)
- Host: IP address of the LLM server (if Remote is set)
- Port: port of the LLM server (if Remote is set)
- Num Retries: number of HTTP request retries from the LLM server (if Remote is set)
- API key: API key of the LLM server (if Remote is set)
- Save: save filename or relative path. If set, the chat history and LLM state (if save cache is enabled) are automatically saved to the file specified. The chat history is saved with a json suffix and the LLM state with a cache suffix. Both files are saved in the persistentDataPath folder of Unity.
- Save Cache: select to save the LLM state along with the chat history. The LLM state is typically around 100MB+.
- Debug Prompt: select to log the constructed prompts in the Unity Editor
- Player Name: the name of the player
- AI Name: the name of the AI
- Prompt: description of the AI role
- Stream: select to receive the reply from the model as it is produced (recommended!)
- Load grammar: click to load a grammar in .gbnf format
- Grammar: the path of the grammar being used (relative to the Assets/StreamingAssets folder)
- Cache Prompt: save the ongoing prompt from the chat (default: true). The prompt is saved while it is being built up by the chat to avoid reprocessing the entire prompt every time.
- Slot: slot of the server to use for computation. Value can be set from 0 to Parallel Prompts - 1 (default: -1 = new slot for each character)
- Seed: seed for reproducibility. For random results every time use -1
- Num Predict: maximum number of tokens to predict (default: 256, -1 = infinity, -2 = until context filled). When this many tokens are reached the model stops generating, so words / sentences might not get finished if the value is too low.
- Temperature: LLM temperature, lower values give more deterministic answers (default: 0.2). The temperature setting adjusts how random the generated responses are. Turning it up makes the generated choices more varied and unpredictable. Turning it down makes the generated responses more predictable and focused on the most likely options.
- Top K: top-k sampling (default: 40, 0 = disabled). The top-k value controls the number of most probable tokens considered at each step of generation. This value can help fine-tune the output and make it adhere to specific patterns or constraints.
- Top P: top-p sampling (default: 0.9, 1.0 = disabled). The top-p value controls the cumulative probability of generated tokens. The model will generate tokens until this threshold (p) is reached. By lowering this value you can shorten output and encourage / discourage more diverse outputs.
- Min P: minimum probability for a token to be used (default: 0.05). The probability is defined relative to the probability of the most likely token.
- Repeat Penalty: control the repetition of token sequences in the generated text (default: 1.1). The penalty is applied to repeated tokens.
- Presence Penalty: repeated token presence penalty (default: 0.0, 0.0 = disabled). Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- Frequency Penalty: repeated token frequency penalty (default: 0.0, 0.0 = disabled). Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- Tfs_z: enable tail free sampling with parameter z (default: 1.0, 1.0 = disabled)
- Typical P: enable locally typical sampling with parameter p (default: 1.0, 1.0 = disabled)
- Repeat Last N: last N tokens to consider for penalizing repetition (default: 64, 0 = disabled, -1 = ctx-size)
- Penalize Nl: penalize newline tokens when applying the repeat penalty (default: true)
- Penalty Prompt: prompt for the purpose of the penalty evaluation. Can be either null, a string or an array of numbers representing tokens (default: null = use original prompt)
- Mirostat: enable Mirostat sampling, controlling perplexity during text generation (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
- Mirostat Tau: set the Mirostat target entropy, parameter tau (default: 5.0)
- Mirostat Eta: set the Mirostat learning rate, parameter eta (default: 0.1)
- N Probs: if greater than 0, the response also contains the probabilities of the top N tokens for each generated token (default: 0)
- Ignore Eos: enable to ignore end of stream tokens and continue generating (default: false)

The license of LLM for Unity is MIT (LICENSE.md) and it uses third-party software with MIT and Apache licenses. Some models included in the asset define their own license terms; please review them before using each model. Third-party licenses can be found in Third Party Notices.md.