OK, that was funny.

I tested out the newest model available through Ollama, llama3.2, to run an AI chat locally on my PC.
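In case anyone wants to try it, here is a minimal sketch of how such a chat can be scripted from Python, assuming the Ollama server is running locally on its default port 11434 and llama3.2 has already been pulled; the prompt is just an example:

```python
# Minimal sketch: chat with llama3.2 through Ollama's local HTTP API.
# Assumes `ollama serve` is running on the default port 11434 and the
# model was fetched beforehand with `ollama pull llama3.2`.
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [
            # Example prompt; I was asking it about Python and databases.
            {"role": "user", "content": "How do I query a SQLite database from Python?"}
        ],
        "stream": False,  # one complete JSON reply instead of a token stream
    },
    timeout=120,
)
print(response.json()["message"]["content"])
```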

First I asked it a few questions about Python and databases and was pleasantly surprised by the quality of the responses.

Then I asked it whether it could access the Internet, and it said yes. So I asked it to summarize one of my blog posts. The answer was very generic. It turned out the model had not accessed my blog at all, it just guessed an answer based on the URL. In fact, models running in Ollama cannot access the Internet at all.

All in all, I am impressed by what it can do, using less than 5 GB of VRAM and providing long answers (several paragraphs) in a few seconds, all of that on my rather old RTX 2060.
