Slightly off topic: some of the tools in HuggingChat don't work, even official ones (Image Editor, TTS):
```
Error calling tool Image Editor
Parameters
  image: output_2_0.webp
  prompt: add a smiling emoji
Error
An error occurred while calling the tool edit_image: No API found
```

```
Error calling tool TTS (fast)
Parameters
  text: hello
  description: Laura's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.
Error
An error occurred while calling the tool gen_tts_fast: Error: Error: Could not resolve app config.
```
I'm using Raycast AI (paid subscription) to interact with AI models through hotkeys. It can even act upon selected text. I've got several hotkeys set up:
- translate (selected text) to English/Spanish/Italian (3 hotkeys)
- fix (selected text) English/Spanish/Italian grammar (3 hotkeys)
- add (selected text) to the current chat prompt, where I can continue typing
There's also a built-in "Summarize webpage" AI action that I assigned a hotkey to and I can get a nice summary of the currently opened webpage without having to read the whole article.
These save me so much time and help improve my language skills, especially the grammar fixes: the app shows you a diff, so you can see what you wrote incorrectly and how to write it correctly.
It also lets you paste the text from the app window into the last used app simply by pressing ENTER, which I use a lot too.
Just some feedback.
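For anyone curious how a grammar-fix diff like the one described above can be rendered, a word-level diff is enough in principle. A minimal sketch using Python's standard-library difflib (the example sentences are illustrative, not Raycast's actual output):

```python
import difflib

# Illustrative before/after pair a grammar fixer might produce.
before = "She go to school yesterday."
after = "She went to school yesterday."

# Word-level unified diff: splitting on whitespace treats each word
# as a "line", so changed words show up as -old / +new entries.
diff = list(difflib.unified_diff(before.split(), after.split(), lineterm=""))
print("\n".join(diff))
```

Running this marks `-go` / `+went`, which is the kind of correction highlight the app surfaces.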
I'd personally be interested to know how this compares to a solution such as Ollama, which I have been using to run local models.
Linux users have a pretty excellent native Ollama client called Alpaca, it's well worth checking out: https://flathub.org/apps/com.jeffser.Alpaca
It's still early in development, but it supports multimodal input and renders nice-looking code blocks.
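For comparison, Ollama exposes a local HTTP API (listening on http://localhost:11434 by default). A minimal sketch of building a request for its /api/generate endpoint; the model name "llama3.2" is an assumption, substitute any model you have pulled:

```python
import json

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("llama3.2", "Summarize this webpage.")
body = json.dumps(payload)  # POST this to http://localhost:11434/api/generate
```

Any HTTP client can then send `body`; the response JSON carries the generated text in its `response` field.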
Slightly off topic: some of the tools in HuggingChat don't work, even one of the official ones (Image Editor):
First impression: it's slick; a bit rough around the edges, but promising. Looking forward to using this, and I hope it improves.
What does it do? Isn’t HuggingChat already available as a dedicated web app (https://huggingface.co/chat/)?
Yes but you now have the ability to run queries with a shortcut from your Mac, as well as switch to a local model using a dedicated keyboard shortcut. See demo: https://x.com/cyrilzakka/status/1838618605648490974
Sorry, this link is not accessible for me. It would be nice to add some more information to the repo’s readme.
@isodev sorry about that! Just uploaded a video to the README
Would it be possible to let you run local models without an account?
Interesting. I am an AI enthusiast, but I don't understand exactly what your product does. Run locally? Run remotely on HF? Would love more info.
Both! You have the option of using HF’s HuggingChat or switching to a local model using a keyboard shortcut
Congrats! I noticed it was initially closed-source and now it's open, nice! Is it a project funded by HuggingFace? (Not immediately clear to me.)
I've built a macOS assistant too (more advanced, though), with a focus on privacy and ease of use (https://getfluid.app). I'd love to open-source it, but I'm not sure about the sustainability of such a business model. Right now I'm experimenting with fully private paid Llama hosting (for the GPU-poor).
Good luck :)
Please post screenshots or video
Should definitely be added to the README, but here's a demo: https://x.com/cyrilzakka/status/1838618605648490974?s=46