Zack Saadioui
8/26/2024
When Ollama starts cleanly, the server log looks something like:

```plaintext
[INFO] Ollama started successfully!
[INFO] Loaded model: Llama3 with dynamic libraries CUDA, CPU.
```
Incoming API requests are logged with a timestamp, client address, and status code:

```plaintext
[2023-11-02 17:00:23] [INFO] POST request to /api/generate from 192.168.1.1 (Status: 200)
```
Hardware details also show up in the log, such as the detected GPU and its available memory:

```plaintext
[INFO] GPU used: NVIDIA GTX 1080, Memory available: 4077 MB
```
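When a log grows long, it helps to filter out just the problem lines before digging in. A minimal sketch using `grep` (the `filter_log` helper name is my own, not part of Ollama):

```shell
# filter_log: keep only [WARN]/[ERROR] lines from a log stream.
# The helper name is illustrative, not an Ollama command.
filter_log() {
  grep -Ei '\[(warn|error)\]'
}

# Example: filter_log < ~/.ollama/logs/server.log
```

Reading from stdin keeps the helper independent of where your log actually lives, so the same function works on macOS, Linux, or a `journalctl` pipe.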
On macOS, Ollama writes its logs under `~/.ollama/logs`. View the most recent server log with:

```bash
cat ~/.ollama/logs/server.log
```
On Linux systems where Ollama runs as a systemd service, read the logs with:

```bash
journalctl -u ollama --no-pager
```
Standalone Linux installs write their logs to:

```bash
/usr/share/ollama/.ollama/logs/
```
On Windows, open the log directory with:

```cmd
explorer %LOCALAPPDATA%\Ollama
```

The most recent log is `server.log`; older logs are rotated to `server-#.log`.
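The per-OS locations above can be folded into one small helper. A hedged sketch (`ollama_log_path` is my own name, and it assumes the default install paths shown above; Windows is omitted since the `explorer` step already covers it):

```shell
# Echo the conventional Ollama server log path for this OS.
# Assumes default install locations; adjust if yours differs.
ollama_log_path() {
  case "$(uname -s)" in
    Darwin) echo "$HOME/.ollama/logs/server.log" ;;
    Linux)  echo "/usr/share/ollama/.ollama/logs/server.log" ;;
    *)      echo "" ;;
  esac
}

# Example: show the last 50 log lines if the file exists
LOG="$(ollama_log_path)"
{ [ -f "$LOG" ] && tail -n 50 "$LOG"; } || true
```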
Copyright © Arsturn 2025