this post was submitted on 29 Sep 2024
36 points (84.6% liked)

Godot


Video description: The video shows the Godot code editor with some unfinished code. After the user presses a button offscreen, the code magically completes itself, seemingly due to an AI filling in the blanks. The examples include a print_hello_world function and a vector_length function. The user can accept or decline the generated code by pressing tab or backspace.

This is an addon I am working on. It can help you write some code and stuff.

It works by hooking into your local LLMs through Ollama, a FOSS tool for running large language models locally.
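For context, Ollama serves a simple REST API on localhost port 11434, and its documented `/api/generate` endpoint is all a completion plugin really needs. A minimal Python sketch of that interaction (the model name is just an example, and this is not the plugin's actual code):

```python
import json
import urllib.request

def build_completion_request(model: str, prompt: str,
                             host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def complete_code(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return its completion."""
    req = build_completion_request(model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `True` instead, Ollama returns the completion token by token, which is what makes the "code writing itself" effect in the video possible.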

Here's a chat interface, which is also part of the package:

Video description: The video shows a chat interface in which the user can talk to a large language model. The model can read the user's code and answer questions about it.

Do you have any suggestions for what I can improve? (Besides removing the blue particles around the user text field)

Important: This plugin is WIP and not released yet!

top 11 comments
[–] [email protected] 6 points 1 month ago (1 children)
[–] [email protected] 5 points 1 month ago

Ollama is really great. The simplicity of it, the easy use via REST API, the fun CLI...

What a fun program.

[–] [email protected] 4 points 1 month ago (1 children)

Is there somewhere we can follow for updates?

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago)

I will likely post on here when I release the plugin to GitLab and the AssetLib.

But I also don't want to spam this community, so there won't be many, if any, updates until the actual release.

If you want something similar right now, there are Fuku for chat interaction and selfhosted copilot for code completion on the AssetLib! I couldn't get the code completion one to work; Fuku works pretty well, but it can't read the user's code at all.

I will upload the files to my GitLab soon though.

EDIT: Updated the GitLab link to actually point to my GitLab page

[–] [email protected] 3 points 1 month ago

What model are you using with Ollama?

Very interested to give this a try if/when you release it.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

Just fixed the problem where it inserts too many lines after completing code.

This issue can be seen in the first demo video with the vector example. There are two newlines added for no reason. That's fixed now:

[–] [email protected] 2 points 1 month ago (1 children)

I've been looking for a plugin like this on and off for a few weeks now. IMO the chat feature is overrated; I just want the autocomplete.

[–] [email protected] 1 points 1 month ago (1 children)

Currently the completion is implemented via keyboard shortcut.

Would you prefer it if I made it complete the code automatically? I personally feel that intentionally asking for a completion is more natural than waiting for one.

Are there some other features you would like to see? I am currently working on a function-refactoring UI.

[–] [email protected] 2 points 1 month ago (1 children)

Completion via a keyboard shortcut is perfect.

As for other features: I don't want any. I just want code completion via keyboard shortcut.

I think a hard aspect is figuring out what context to feed the LLM. IIRC GitHub Copilot only feeds what is in the current file above the cursor, but I think feeding the whole file plus other open code tabs would be super useful.
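The "current file, above the cursor" strategy can be approximated by keeping only the lines nearest the cursor up to a budget. A minimal sketch, using characters as a rough stand-in for tokens (`clip_context` is a hypothetical helper, not part of any existing plugin):

```python
def clip_context(lines: list[str], cursor_line: int, max_chars: int = 4000) -> str:
    """Keep the lines closest above the cursor that fit within a character budget.

    Walks upward from the cursor so the most recent (and usually most
    relevant) lines survive when the budget runs out.
    """
    context = []
    used = 0
    for line in reversed(lines[:cursor_line]):
        cost = len(line) + 1  # +1 for the newline
        if used + cost > max_chars:
            break
        context.append(line)
        used += cost
    return "\n".join(reversed(context))
```

The resulting string would then be sent as the prompt prefix; a larger budget trades longer first-token latency for better-informed completions.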

[–] [email protected] 2 points 1 month ago (1 children)

You're right that it can be useful to feed in the contents of other related files.

However!

LLMs take a really long time to start writing anything when given a large context. The fact that GitHub's Copilot can generate code so quickly even though it has to keep the entire code file in context is a miracle to me.

Including all related or opened GDScript files would be way too much for most models, and it would likely take about 20 seconds before it actually starts generating code (this delay is called first-token latency). So I will likely only include the current file in the context window, as even that might already take some time. Remember, we are running local LLMs here, so not everyone has a blazingly fast GPU or CPU (I use a GTX 1060 6GB, for instance).

Example: I just tried it, and it took a good 10 seconds to complete some 111-line code without any other context using this pretty small model, and then about 6 seconds to write about 5 lines of comment documentation (on my CPU). It takes about 1 second with a very short script.

You can try this yourself: use something like HuggingChat to test a big-context model like Command R+, fill its context window with a really long string (copy-paste it a bunch of times), and see how much longer it takes to respond. For me, it's the difference between one second and 13 seconds!

I am thinking about embedding the current working file, or maybe some other opened files, to pull the most important functions out of the script and keep the context short. This way the first-token delay can be shortened a bit.
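A cheaper variant of that idea is to distill a script down to just its function signatures before adding it to the context: the model still learns which helpers exist, but the prompt stays short. A hypothetical Python sketch (not the plugin's code) using a simple regex over GDScript source:

```python
import re

# Matches GDScript function declarations such as
#   func vector_length(v: Vector2) -> float:
#   static func make():
_SIG_PATTERN = re.compile(r"^\s*(?:static\s+)?func\s+\w+\([^)]*\)(?:\s*->\s*\S+)?\s*:")

def extract_signatures(gdscript_source: str) -> str:
    """Reduce a GDScript file to its function signatures to shorten LLM context."""
    sigs = [m.group(0).rstrip(":").strip()
            for line in gdscript_source.splitlines()
            if (m := _SIG_PATTERN.match(line))]
    return "\n".join(sigs)
```

This keeps only one line per function, so even a long script contributes a handful of tokens; a real implementation would likely also keep docstring-style comments or use Godot's own script parser instead of a regex.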

This is a completely different story with hosted LLMs, as they tend to have blazingly quick first-token latency, which makes the wait trivial.

[–] [email protected] 1 points 1 month ago

Makes sense. Glad anything will exist at all though!