Replies: 44 comments 2 replies
-
Are there any discussions about this? I think it would be a really useful feature to have GitHub Copilot in Helix, and I would switch from GoLand to Helix to get my work done.
-
There are no plans to have Copilot in the editor core, so this will have to wait until there is proper plugin support.
-
Thanks for the info, Sudormrfbin. If something changes please let us know, it would be a very useful feature! In the meanwhile, it is possible to run alternatives that are not as good, but they could help.
-
Waiting on this feature to move from neovim to helix for good
-
Has anyone tried integrating this copilot LSP? https://github.com/TerminalFi/LSP-copilot/blob/master/language-server/package.json#L4
-
Isn't GitHub getting sued over how Copilot takes data? I definitely don't think Copilot should be anywhere near core.
-
Is there any update on this?
-
Hi, this needs a plugin system in the first place, which does not exist yet. The plugin system is currently being prototyped, and it might take a long while before it is done. The current prototype might even be scrapped if it turns out the tech is not as suitable as other options. A plugin system is quite a big endeavor, so don't wait on that. That said, the maintainers are quite good and productive, so it will get done at some point. PS: I also miss this feature a lot and have been using VSCode when I have to use it.
-
This seems like something we can solve with LSP integration instead of the plugin system. I believe that's how folks are using it in Sublime https://forum.sublimetext.com/t/github-copilot-for-sublime-text-4-is-coming/64449/3
-
Was anyone able to use the copilot LSP successfully?
-
With Copilot X's recent announcement I had to look this up for Helix, landing me here. The LSP idea sounds alright, but it would be nice to craft a deeper UX around Copilot. Are plugin-like APIs available in Helix to, perhaps, temporarily fork Helix and integrate a Copilot plugin directly into the forked Helix binary? I.e., write a plugin directly into a fork of Helix proper, even though we only intend to migrate it to a plugin ASAP? If Copilot X is useful (big if, heh) it would be nice to have a good experience in Helix for it. They have a Neovim plugin, for comparison.
-
Also ended up here after the Copilot X announcement, FWIW. No idea if implementing it as just an LSP is technically feasible, but that would certainly be nice if so.
-
The way things are going with AI these days, some level of non-trivial support will eventually be necessary, but some aspects like documentation could also belong to the build system. Either way, I suspect we will see competitors to Copilot X in the coming years (or days, maybe), notably fully open source ones. The feature set will also evolve drastically. For these reasons I think both a sense of urgency and restraint are necessary at the same time.
-
A temporary solution I have for myself is a chatgpt window running a "bridge" script in devtools. It allows chatgpt to communicate with other apps through a locally running database/job queue. If it makes sense, I can try to make it into a plugin.
-
Not just Copilot; there are now several similar services: Copilot, tab9, codeium, codegeex (vim-ai just uses ChatGPT), ...
-
Regardless of the model size, the great thing with ollama is that projects like that make it possible to easily pull the model of your choice, one that fits your needs and system capabilities, while ensuring privacy. I wonder if there are already good LSPs out there for ollama that could be hooked into helix? Has anyone considered the llm-ls project? I just found it now, but it is supposed to work as an LSP, and it can bridge over to LLM runners, ollama being one of them. There has already been some interest in figuring out the LSP integration from @hemedani and @webdev23 on the issue huggingface/llm-ls#49. Maybe there is someone here who can help them out with some directions to get going?
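For anyone who wants to experiment, hooking an external language server into Helix is just configuration. The snippet below is a hypothetical sketch: the server name, binary, and the choice of Rust as the example language are assumptions, so check the llm-ls README for its actual flags and initialization options.

```toml
# ~/.config/helix/languages.toml
# Register llm-ls as an additional language server (hypothetical setup).
[language-server.llm-ls]
command = "llm-ls"

# Attach it to a language alongside the regular server.
[[language]]
name = "rust"
language-servers = ["rust-analyzer", "llm-ls"]
```

Helix merges this with its defaults, so only the additions need to be declared.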
-
It's possible to overwrite the OpenAI endpoint of my language server here, but if it doesn't follow the OpenAI pattern it won't work, and I don't think ollama does. If people wanted support for ollama it would be fairly easy to add, but any time I've tried locally hosted models they've been pretty poor for this use case.
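For context, the "OpenAI pattern" mentioned above is roughly the chat-completions request shape sketched below; a backend only needs to accept this JSON at an OpenAI-style route for such a client to talk to it. This is an illustrative sketch, not helix-gpt's actual code; the system prompt and model name are made up.

```python
import json

def chat_completions_payload(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Build the JSON body an OpenAI-style /v1/chat/completions endpoint expects."""
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a code completion assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

payload = json.loads(chat_completions_payload("fn main() {"))
print(sorted(payload.keys()))  # ['messages', 'model']
```

A backend that does not speak this shape (different route, different field names) would need a translation layer in between.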
-
@leona thanks for sharing! Your project could be a nice template for running against ollama. When I tested Mistral 7B separately for code assistance in Rust it worked pretty well for me, at least better than nothing, but I would probably want to run the larger models for even better responses. I'm personally more into these open models mainly for privacy reasons, but anyhow your project could also be useful as-is for the ones that started the issue and were interested in having Copilot running (sorry for hijacking the thread for ollama things, by the way). Really nice to have a running LSP plugin example; now I wish I just had more time to try this out with ollama. If anyone gets started I'll definitely try to find some time to help out (but preferably in Rust in that case).
-
I'm not quite ready to rely on LLMs for coding. This is Mistral 7B: "An octahedron consists of eight vertices and eight triangular faces, with each face being made up of three vertices. Therefore, there are indeed a total of 8 x 3 = 24 individual vertices, but since a triangle is defined by three non-unique vertices, there are only 12 unique triangles in an octahedron."
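For the record, the quoted answer is wrong on every count: an octahedron has 6 vertices, 12 edges, and 8 faces, which is easy to check against Euler's formula for convex polyhedra.

```python
# Euler's formula for convex polyhedra: V - E + F == 2.
# Octahedron: 6 vertices, 12 edges, 8 triangular faces
# (not the 24 vertices / 12 triangles the model claimed).
V, E, F = 6, 12, 8
assert V - E + F == 2
# Each of the 8 triangular faces has 3 vertex slots, but every
# vertex is shared by 4 faces: 8 * 3 / 4 == 6 distinct vertices.
assert 8 * 3 / 4 == V
print("checks out")
```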
-
Hi! I would like to share the current workflow I found myself comfortable with: https://github.com/7flash/helix-chatgpt Notice it works especially well with Warp Terminal, where I created a workflow/button to execute the script, but you can also define a bash function, etc. When you run the script, it opens a new file in helix where you can write your prompt, and I found that more comfortable than any existing UI.
Just sharing what works for me; there isn't good documentation yet, so please feel free to contribute.
-
Hi everyone! @leona thanks for your effort in creating helix-gpt. I made an attempt to integrate with ollama. @kyfanc maybe you would be interested in having a try with it? Regarding llm-ls, I believe they are implementing the features as LSP custom methods, which the helix built-in LSP client does not support currently.
-
@kyfanc I'm off on a trip for a while now without my GPU, but I'll give it a try once I'm back. Ideally, in the end, I think this sort of application should be a simple binary rather than bun + TypeScript, written in Rust or similar, to be fast and efficient. But I also saw that it can at least be built into a binary via bun (I suppose that means bundling and running a full V8 to run the LSP). Nice that there is some progress either way! :-) And also nice with the approach of supporting both cloud GPT providers and ollama so that users can choose what fits them best. :-)
-
@tirithen bun doesn't use V8 actually; it uses JavaScriptCore, which has a smaller size.
-
So I really wanted to see this feature (albeit for Claude), but I'm actually seeing good results using aichat: https://github.com/sigoden/aichat Using this with the "insert-output" and "append-output" commands in Helix, the output is written straight into Helix.
You can run it outside of Helix just as easily.
Using the shell_pipe feature you can also run aichat against highlighted text.
Or this is also very easy from the command line by specifying the function.
Note that I've been verbose on purpose for demonstration purposes. The above commands can be reduced in length, and there are hotkeys for the shell commands.
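As a sketch of what that looks like in practice (the prompts here are made up, and the exact aichat invocation may differ; `:insert-output`, `:append-output`, and `:pipe` are the relevant Helix typed commands):

```
# Inside Helix, insert aichat's output at the cursor:
:insert-output aichat 'write a rust function that reverses a string'

# Or append it after the selection:
:append-output aichat 'give an example of parsing JSON in rust'

# Pipe the current selection through aichat, replacing it with the result:
:pipe aichat 'add doc comments to this function'
```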
-
The benefit comes when AIs get larger contexts and can understand compile_commands.json or compile_flags.txt. Then you can just say: add test cases for the functions below, and add a bash script to drive it. Also write a bat file for Windows like the other scripts.
-
Super cool tip!
-
Here is the upgraded version: https://github.com/7flash/gurrai
-
I'd suggest modifying Continue. It's under the Apache License and is one of the most mature open-source AI plugins. Since it's also on VSCode and JetBrains, it should be well abstracted. It supports a number of backends and features. Perhaps Helix can look into teaming up with Continue sometime in the future?
-
For anyone looking for a terminal-friendly alternative to Cursor, one of my co-workers just shared https://aider.chat/ with me. You might be able to make a helix plugin / language server for it? 🤷♂️
-
I really want to get some AI coding assistant in Helix. I tried ... Then I tried ...