Etherist | Member | Posts: 27 | Joined: 6/9/2024 | Location: South Australia

Hey there Beech Pilot,

You could download a local Large Language Model (LLM) and do a little OmniScript training of the model; check out https://huggingface.co/models.

Install LM Studio from here: https://lmstudio.ai, then download a 'Coding' LLM from Hugging Face.

Potential 'Coding' LLM options (smallest to largest):

* Qwen/CodeQwen1.5-7B-Chat (7 billion parameters)
* THUDM/codegeex4-all-9b-GGUF (9 billion parameters)
* DeepSeek-ai/DeepSeek-Coder-V2-Lite-Instruct (16 billion parameters)

Choose a quantization size of Q5_K_M or larger, preferably Q6_K_L.

Note: you will need a PC with reasonable grunt, as these LLMs need a bit of horsepower to run locally, i.e. a relatively new gaming machine (16-core/32-thread CPU and a graphics card with 8 GB+ of VRAM), with a minimum of 16 GB of RAM but preferably 32 GB+.

[Edited by Etherist on 9/20/2024 7:01 PM]
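
Once a model is loaded, LM Studio's built-in local server speaks an OpenAI-compatible API (on port 1234 by default), so you can point ordinary OpenAI client code at it and start asking OmniScript questions. Here's a minimal Python sketch, assuming the server has been started and one of the coding models above is loaded; the model ID, system prompt, and question are illustrative placeholders, not verified OmniScript guidance:

```python
# Minimal sketch: query a coding LLM served locally by LM Studio.
# Assumes the LM Studio server is running on its default port (1234)
# and a coding model (e.g. CodeQwen1.5-7B-Chat) has been loaded.
from openai import OpenAI

# LM Studio exposes an OpenAI-compatible endpoint; the API key can be any string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="CodeQwen1.5-7B-Chat",  # placeholder: use whatever model ID LM Studio shows
    messages=[
        {"role": "system", "content": "You are an assistant that helps write OmniScript."},
        {"role": "user", "content": "Show me how to declare and use a variable in OmniScript."},
    ],
    temperature=0.2,  # keep it low so the model stays close to literal code answers
)

print(response.choices[0].message.content)
```

Keeping the temperature low tends to give more deterministic, code-like answers, which is usually what you want when asking it to generate or explain scripts.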