
ZDNET Key Takeaways
- If you want to use an agentic browser, consider native AI.
- Local AI puts less strain on the power grid.
- This approach keeps your queries on your local system.
Agents are storming the gates of the browser castle, and it looks like things are heating up for another browser war, only this time with ‘smart’ browsers.
From my perspective, that conflict is going to cause a big problem. Imagine that everyone around the world is using an agentic web browser. Those agentic actions can take serious energy, causing electricity prices to skyrocket and having profound negative impacts on the climate.
There is a solution to this challenge: local AI.
Also: Opera Agentic browser rolling out to Neon users – how to join the waiting list
On the rare occasions when I need to use AI, I always do so locally, exclusively using Ollama.
Unfortunately, all but one of the agentic browsers on the market use cloud-based AI. To me, this approach makes those agentic browsers a no-go. Not only do I dislike the idea of putting further strain on the electric grid, but I would prefer to keep all my queries local so that no third party can use them for training or profiling.
I’ve found two agentic browsers that can work with native AI: BrowserOS and Opera Neon. Unfortunately, only one of them, BrowserOS, is currently available to the public.
BrowserOS is available for Linux, MacOS, and Windows. To use it with locally installed AI, you need to install Ollama and download a model that supports agentic browsing, such as qwen2.5:7b.
Also: I’m Testing the Top AI Browsers – Here’s Which Browsers Really Impressed Me
I’ve been testing BrowserOS and have found it to be a solid entry in the agentic browser market. In fact, I’ve found that it can stand head-to-head with browsers that rely on cloud-based AI, without any negative side effects or privacy issues.
Once I had BrowserOS set up to work with Ollama (more on that in a bit), I opened the Agentic Assistant and ran the following query: Open amazon.com and search for a wireless charging stand that supports the Pixel 9 Pro.
It took a while to get everything set up properly so that BrowserOS’s agentic tool could function, but once configured, it worked perfectly.
I will warn you: using BrowserOS in this way requires quite a bit of system resources, so if your computer is underpowered, it may struggle to perform.
Also: Top 20 AI tools of 2025 – and the #1 thing to remember when using them
According to the Ollama site, the RAM requirements for running local AI are:
- Minimum (8GB): This is the absolute minimum requirement to get started and will allow you to run smaller models, typically in the 3B to 7B parameter range.
- Recommended (16-32GB): For a better experience and the ability to run more capable 13B models, 16GB of RAM is recommended. To handle 30B+ models comfortably, you should aim for at least 32GB of RAM.
- Large models (64GB+): To run the largest and most powerful models, such as the 70B parameter variant, you’ll need 64GB of RAM or more.
From experience, minimum RAM will not work. My System76 Thelio desktop has 32GB of RAM, and that setup works fine for my purposes. If you want to use larger LLMs (or you want more speed in your agentic use case), I would go with 64GB+. Even at 32GB, agent tasks can be slow, especially when running other apps and services.
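If you're unsure where your machine falls on that scale, a quick preflight check helps before you commit to a model size. The snippet below is my own sketch, not part of the BrowserOS or Ollama setup; it reads total RAM from /proc/meminfo on Linux, and the 16GB floor simply mirrors the "recommended" tier above.

```shell
# Rough preflight check before pulling a model (Linux only: reads /proc/meminfo).
# The 16 GB floor matches Ollama's "recommended" tier; adjust to taste.
min_kb=$((16 * 1024 * 1024))                          # 16 GB expressed in kB
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
total_gb=$((total_kb / 1024 / 1024))
if [ "$total_kb" -lt "$min_kb" ]; then
  echo "Only ${total_gb} GB RAM: stick to 3B-7B models"
else
  echo "${total_gb} GB RAM: 7B-13B models should run comfortably"
fi
```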
Also: Are AI browsers worth the security risk? Why are experts worried?
With enough resources, BrowserOS will carry out your agentic tasks.
But how do you get there? Let me show you.
I’m assuming you’ve installed BrowserOS on your platform of choice.
Installing Ollama
Because Ollama can be easily installed on both MacOS and Windows by downloading the binary installer from the Ollama download page, I am going to show you how to install it on Linux. To do that, open a terminal and run the following command:
curl -fsSL https://ollama.com/install.sh | sh
Once the installation is complete, you can download a model that supports agentic browsing. We’ll go with qwen2.5:7b. To pull that model, issue the command:
ollama pull qwen2.5:7b
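The pull can take a few minutes, since the 7B model weighs in at several gigabytes. As a quick optional check of my own (not part of the official steps), you can confirm the model actually landed:

```shell
# Confirm the model downloaded correctly (assumes ollama is on your PATH).
if command -v ollama >/dev/null 2>&1; then
  ollama list | grep qwen2.5   # should show qwen2.5:7b and its size on disk
else
  echo "ollama not found -- run the install step first"
fi
```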
After the model pull is finished, it’s time to configure BrowserOS to use it.
Configure BrowserOS
Let’s configure BrowserOS to use Ollama.
1. Open BrowserOS AI Settings
Open the BrowserOS app and then point to:
chrome://settings/browsers-ai
2. Select Ollama
In the resulting window, click the Use button associated with Ollama. Once you’ve completed that step, you’ll need to configure Ollama as follows:
- Provider Type – Ollama
- Provider Name – Ollama Qwen
- Base URL – Leave this as is, unless you are running Ollama on a separate server within your LAN, in which case replace 127.0.0.1 with the IP address of the hosting server
- Model ID – qwen2.5:7b
- Context window size – 12800 (but only if you have a powerful system; otherwise, stick with the smaller default)
- Temperature – 1
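Before leaving the settings page, it's worth confirming that the Base URL actually points at a live Ollama instance. This check is my own addition, not a BrowserOS step; 11434 is Ollama's default port, and /api/tags is its standard model-listing endpoint.

```shell
# Quick check that Ollama is listening where BrowserOS expects it.
# Swap in your server's IP if Ollama runs on another machine on your LAN.
curl -s --max-time 3 http://127.0.0.1:11434/api/tags \
  || echo "No Ollama instance reachable at that address"
# A healthy server responds with JSON listing your installed models,
# which should include qwen2.5:7b after the pull above.
```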
Also: I tested all of Edge’s new AI browser features — and it felt like I had a personal assistant
Make sure you have selected the correct model and increased the size of the context window.
Screenshot by Jack Wallen/ZDNET
Make sure you set the new provider as the default.
3. Stop and Restart the Ollama Service
To make BrowserOS work with Ollama, you must first stop the Ollama service. To do this on Linux, the command would be:
sudo systemctl stop ollama
Once the service is stopped, you need to start it again with CORS (Cross-Origin Resource Sharing) enabled. To do this on Linux and MacOS, run the command:
OLLAMA_ORIGINS='*' ollama serve
To perform this step with Windows PowerShell, the command would be:
$env:OLLAMA_ORIGINS='*'; ollama serve
To do this from the Windows command line, run the following command:
set OLLAMA_ORIGINS=* && ollama serve
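Note that exporting OLLAMA_ORIGINS in a foreground shell only lasts for that session. On Linux, the Ollama documentation describes making the setting permanent with a systemd override instead; the sketch below assumes the systemd unit created by the install script.

```shell
# Persist the CORS setting across reboots (Linux, systemd unit from the
# install script). First open an override file:
#   sudo systemctl edit ollama
# then add these two lines to it:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
# Finally, reload systemd and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```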
Using the New Provider
At this point, you can open the agent side panel in BrowserOS and run your agentic query, as I suggested above: Open amazon.com and find a wireless charging stand that supports the Pixel 9 Pro.
Also: I let ChatGPT Atlas do Walmart shopping for me – here’s how the AI browser agent did it
BrowserOS will do its thing and eventually open a tab with the above search terms.
BrowserOS is working with Ollama to find a new charging stand for my Pixel 9 Pro.
Screenshot by Jack Wallen/ZDNET
It takes some time to get used to working with an agentic browser, but once you get the hang of it, you’ll find that it can be quite helpful. And by using your own locally installed AI, you can feel a little less guilty about using it.

