KerrAvon | 19 days ago | on: Step 3.5 Flash – Open-source foundation model, sup...
Is there a reliable way to run MLX models? On my M1 Max, LM Studio sometimes outputs garbage through its API server even when the LM Studio chat UI with the same model works perfectly fine. llama.cpp variants, by contrast, generally just work.