
ZDNET's key takeaways
- Ollama's developers have released a native GUI for macOS and Windows.
- The new GUI greatly simplifies using AI locally.
- The app is easy to install and allows you to pull various LLMs.
If you use AI, there are plenty of reasons you might want to work with it locally instead of in the cloud.
First, it offers far better privacy. When you use a large language model (LLM) in the cloud, you never know whether your queries or results are being tracked or even saved by a third party. Using an LLM locally also saves energy. The amount of energy required to run cloud-based LLMs keeps growing and could become a problem in the future.
Ergo, locally hosted LLMs.
Also: How to run DeepSeek AI locally to protect your privacy - 2 easy ways
Ollama is a tool that allows you to run LLMs locally. I've been using it for some time and have found it simplifies the process of downloading and using various models. Although it does require serious system resources (you wouldn't want to use it on an aging machine), it runs quickly and lets you work with different models.
But Ollama itself has always been a command-line affair. There are third-party GUIs (such as Msty, which has been my go-to), but until recently, the developers behind Ollama hadn't produced a GUI of their own.
That recently changed, and there's now a straightforward, user-friendly GUI, simply named Ollama.
It works with common LLMs - but you can pull others
The GUI is fairly basic, but it's designed so that anyone can jump in and start using it right away. There's also a short list of LLMs that can be easily pulled from the model drop-down list. Those models are fairly common (such as the Gemma, DeepSeek, and Qwen models). Select one of those models, and the Ollama GUI will pull it for you.
If you want to use a model that isn't listed, you'll have to pull it from the command line, like so:
ollama pull MODEL
Where MODEL is the name of the model you want.
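As an illustration (the model name here is just an example - check the Ollama library for what's actually available), pulling a specific model looks like this:
ollama pull llama3.2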
Also: How I feed my files to a local AI for better, more relevant responses
You can find the complete list of available models in the Ollama library.
After pulling a model, it appears in the drop-down to the right of the query bar.
The Ollama app is just as easy to use as any cloud-based AI interface on the market, and it's free to use for macOS and Windows (sadly, there's no Linux version of the GUI).
I kicked the tires of the Ollama app and found that, although it doesn't have the feature set of Msty, it looks and feels more at home with the macOS aesthetic. The Ollama app also feels slightly faster than Msty (both in opening and in responding to queries), which is a good thing, because local AI can often be a bit slow (due to limited system resources).
How to install the Ollama app on Mac or Windows
You're in luck, because installing the Ollama app is as easy as installing any app on macOS or Windows. You simply point your browser to the Ollama download page, download the app for your OS, double-click the downloaded file, and follow the instructions. For example, on macOS, you drag the Ollama app icon into the Applications folder, and you're done.
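If you'd like to confirm everything is in place from a terminal (a quick, optional check, assuming the ollama command-line tool was installed alongside the app), you can list the models you've downloaded so far:
ollama list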
Using Ollama is just as easy: select the model you want, download it, and then query away.
Pulling an LLM is as easy as choosing it from the list and letting the app do the work.
Jack Wallen/ZDNET
Should you try the Ollama app?
If you're looking for a reason to try local AI, now is the time.
Also: I tried Sanctum's local AI app, and it's exactly what I needed to keep my data private
The Ollama app makes migrating away from cloud-based AI about as easy as it can get. The app is free to install and use, as are the LLMs in the Ollama library. Give it a chance, and see if it doesn't become your go-to AI tool.
Want more stories about AI? Check out AI Leaderboard, our weekly newsletter.