tinyllama · 3.5M Downloads · Updated 2 years ago
ollama run tinyllama
curl http://localhost:11434/api/chat -d '{
  "model": "tinyllama",
  "messages": [{"role": "user", "content": "Hello!"}]
}'
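The request body above is plain JSON. As a sketch, it can be assembled programmatically with the standard library; `build_chat_request` is a hypothetical helper name, and the `stream` field is the API's flag for partial-response streaming (it defaults to true on the server):

```python
import json

def build_chat_request(model, content, stream=True):
    # Hypothetical helper: assembles the JSON body for POST /api/chat.
    # Set stream=False to receive one complete JSON reply instead of chunks.
    body = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": stream,
    }
    return json.dumps(body)

print(build_chat_request("tinyllama", "Hello!", stream=False))
```

The same body works for the curl example: pipe the output of the helper into `-d @-`.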
from ollama import chat

response = chat(
    model='tinyllama',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response.message.content)
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'tinyllama',
  messages: [{role: 'user', content: 'Hello!'}],
})
console.log(response.message.content)
36 models

Name              Size   Context  Input  Updated
tinyllama:latest  638MB  2K       Text   2 years ago
tinyllama:1.1b
TinyLlama is a compact model with only 1.1B parameters, making it suitable for the many applications that demand a small computation and memory footprint.
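A back-of-the-envelope calculation shows why 1.1B parameters fits in a small footprint: the 638MB download is roughly consistent with 4-bit quantized weights (the figures below are approximations of raw weight storage, not exact file sizes, which also include metadata and quantization overhead):

```python
PARAMS = 1.1e9  # TinyLlama's parameter count

def weight_size_gb(params, bits_per_param):
    # Raw weight storage: params * bits, converted to gigabytes.
    return params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weight_size_gb(PARAMS, bits):.2f} GB")
```

At 4 bits per parameter this gives about 0.55 GB, close to the 638MB listed above; a full 16-bit copy would need around 2.2 GB.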
Hugging Face
GitHub