I got an error after command 3. I had to run 'ollama serve' first, and I'm not sure whether that should be added as a new step. For reference, the steps in question are:

3. Pull the model: ollama pull llama3:8b
4. Run the model: ollama run llama3:8b
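In case it helps, this is the order that worked for me (a minimal sketch, assuming a local install where the Ollama server is not already running as a background service):

    # Terminal 1: start the Ollama server (error appeared when I skipped this)
    ollama serve

    # Terminal 2: pull the model, then run it
    ollama pull llama3:8b
    ollama run llama3:8b

If Ollama was installed as a system service, the serve step may not be needed, which might be why the guide leaves it out.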