8 vCPUs and 16 GB of RAM:
- phi3:14b-medium-128k-instruct-q4_1: 2.5930
- phi3.5:3.8b: 8.1542
- phi3.5:3.8b-mini-instruct-fp16: 5.2317
- gemma2:9b-instruct-q8_0: 3.2745
- llama3.1:latest: 5.6714
- llama3.1:8b-instruct-q8_0: 4.0183
- llama3.1:8b-text-q4_K_M: 5.6765
- llama3.1:8b-text-q8_0: 4.0403
- llama3.2:1b: 22.6293
- llama3.2:3b: 12.3215
- llama3.2:1b-text-q4_K_M: 25.0413
- finalend/hermes-3-llama-3.1:8b-q8_0: 4.0413
- phi3:14b-medium-4k-instruct-q4_1: 2.6379
- qwen2.5:7b-instruct-q5_0: 4.4067
- qwen2.5-coder:1.5b: 21.7418
- qwen2.5-coder:7b-instruct: 6.0470
- qwen2.5-coder:7b-instruct-q8_0: 4.2324
- deepseek-coder:6.7b: 8.2300
- deepseek-r1:1.5b: 29.7842
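
Numbers like the ones above can be collected against a local Ollama server, which reports `eval_count` (generated tokens) and `eval_duration` (in nanoseconds) in each `/api/generate` response. The sketch below is one way to do it, assuming Ollama is running on its default port; the `benchmark` helper and the prompt are illustrative, not the script used for this list.

```python
import json
import urllib.request


def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's eval stats to tokens/second.

    Ollama reports eval_duration in nanoseconds, so divide by 1e9
    to get seconds before computing the rate.
    """
    return eval_count / (eval_duration_ns / 1e9)


def benchmark(model: str, prompt: str,
              host: str = "http://localhost:11434") -> float:
    """Run one non-streaming generation and return tokens/second.

    Requires a running Ollama server with the model already pulled;
    this helper is an assumption about the measurement setup, not
    the original benchmark script.
    """
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return tokens_per_second(body["eval_count"], body["eval_duration"])


# Example (needs a live server):
#   rate = benchmark("llama3.2:1b", "Why is the sky blue?")
#   print(f"llama3.2:1b: {rate:.4f}")
```

Averaging several runs per model would smooth out cache and scheduling noise, since single-run rates on a shared 8 vCPU host can vary.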