Inference computing is critical in this new era of artificial intelligence, but energy and cost issues can plague companies trying to implement AI.
D-Matrix Corp., which builds a computing platform designed specifically for AI inference workloads, is determined to give customers more inference in less time and with less energy.
“We’re super excited … to announce the world’s most efficient AI computing accelerator for inference,” said Sid Sheth (pictured), founder and chief executive officer of d-Matrix. “We built this product with inference and inference only in mind. When we started the company back in 2019, we essentially looked at the landscape of AI compute out there and made a bet that inference computing would be the largest computing opportunity of our lifetime.”
Sheth spoke with theCUBE Research’s John Furrier at SC24, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the evolution of inference computing. (* Disclosure below.)
Solving the pain points of inference computing
D-Matrix has built a Peripheral Component Interconnect card …