Micron Puts SSD into AI Mix
Gregory Wong, principal analyst with Forward Insights, said the 9300 series could tackle two AI-related workloads: machine learning and inferencing. With the former, it might be churning through large data sets where there are “tons and tons” of faces to learn from. Facial recognition itself is the inferencing part, an example of an AI workload that might need to happen at the edge. “The machine has already learned from this huge data set and now they have to put it to use,” he said. In a surveillance application, you want that inference made quickly and on-site; the system can’t be querying the cloud to do face matching.
Gary Hilson, EE Times
Wong said the capacity and very low latency of the Micron 9300 series are what AI workloads require, although its latency falls short of what an Intel Optane-based SSD can provide. But given that Optane is price-prohibitive for large data sets, the Micron offering makes a lot of sense when there’s a lot of data to churn through.
The high capacity could be a drawback in some instances, in that the drive would take a long time to rebuild if it ran into problems. You also wouldn’t want an excessively large SSD doing the inference in an autonomous vehicle, Wong said. Nor can such a vehicle ship data to the cloud and back to make real-time decisions, even with 5G networks, because there’s always the potential for a lag or even an outage. “The car has to be able to react because it’s not just the latency, it’s also the quality of the connection.”
It will be several years before we’re at that level of autonomous driving on the roadways. In the meantime, said Wong, there are plenty of AI workloads that need to be addressed by various memory and storage technologies, including SSDs, both in the data center and at the edge, in scenarios where an immediate inference is required.