Google's TPU v8 Splits Into Two: Training vs. Inference in the Agentic Era
Google just announced TPU v8, but instead of one chip, they're shipping two: v8T for training and v8I for inference. Here's why the bifurcation matters for AI's next phase.
Hugging Face just shipped MLX support in Transformers, letting you run models natively on Apple Silicon with zero code changes. It's the PR we all wanted to write.