IoT Accelerator As a Solution & Service

I want a GPU for three hours, but I don't want to move everything to an expensive cloud provider or buy a super expensive computer.

Alex Aman
DataDrivenInvestor


Photo by Alexandre Debiève on Unsplash

These days many people want to know about good startup ideas, so here is one to steal:

A new computing paradigm that combines two architectures: many-core accelerators (GPUs) paired with the powerful, general-purpose multi-core processors that already exist in any cloud platform or personal laptop.

GPUs are expensive but quite useful. They pack a massive number of simple cores that accelerate algorithms with a high degree of data parallelism, whereas multi-core processors aim at reducing latency in sequential programs through sophisticated control logic and large cache memories.
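To make the contrast concrete, here is a minimal sketch of data parallelism on a GPU, assuming a local CUDA-capable device and the Numba library (neither is mentioned in the original idea, they simply illustrate the point): each of the many simple cores handles one array element, instead of one CPU core looping over all of them.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index
    if i < out.size:
        out[i] = a[i] + b[i]  # each thread computes a single element

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # thousands of threads run at once
```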

GPU as a service can be a cheap way to tap that power from an existing architecture, boosting the execution throughput of parallel applications with thousands of simple cores and high memory bandwidth.

So an architecture of physical GPUs, set up as explained below, could help companies that need GPUs but cannot afford GPU laptops or cloud services for every employee. It could be a great business: wherever computationally heavy tasks must finish quickly, for example investment trading or brute-force attack monitoring, the power of GPUs can be used on demand.

Physical GPU Center interacting with Local computer

Every computer has an internal bus through which its components communicate, and by extending that bus over PCI Express, a local computer can interact with remote GPUs. With MMIO (memory-mapped I/O), the same mechanism used by everyday peripherals such as headphones and speakers, a local computer can become aware of an external component (a GPU sitting in another part of the city, country, or continent) and address it as if it were attached via PCIe.
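As a rough sketch of what memory-mapped I/O looks like from user space on Linux: once a PCI device's BAR (Base Address Register) region is mapped, its registers can be read and written like ordinary memory. The device path and register offsets below are purely hypothetical placeholders, not part of the original idea.

```python
import mmap
import struct

# Hypothetical path: BAR0 of a PCI device as exposed by the Linux sysfs tree.
BAR0_PATH = "/sys/bus/pci/devices/0000:01:00.0/resource0"

with open(BAR0_PATH, "r+b") as f:
    bar = mmap.mmap(f.fileno(), 4096)               # map the first page of BAR0
    (status,) = struct.unpack_from("<I", bar, 0x0)  # read a 32-bit register at offset 0x0
    struct.pack_into("<I", bar, 0x4, 0x1)           # write a 32-bit value at offset 0x4
    bar.close()
```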

And with Direct Memory Access (DMA), the same simple technology used by USB drives, GPUs can read and write main memory directly and start computing on the data, instead of routing every request through the local CPU.
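A small sketch of DMA-friendly transfers, again assuming a local CUDA GPU and Numba rather than the remote setup described here: the host array is page-locked ("pinned") so the GPU's DMA engine can copy it directly, without the CPU shuttling the bytes itself.

```python
import numpy as np
from numba import cuda

n = 1 << 20
pinned = cuda.pinned_array(n, dtype=np.float32)  # page-locked host buffer
pinned[:] = np.random.rand(n)

d_arr = cuda.to_device(pinned)  # host-to-device copy handled by the DMA engine
d_arr.copy_to_host(pinned)      # device-to-host copy, again via DMA
```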

Extension of Local Client Computer to Physical GPUs via API

On top of all this, the architecture can expose an API layer, so anyone can use it from VMs, Jupyter notebooks, cloud environments, and so on.
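How that API layer might feel from a notebook is sketched below. Every endpoint, URL, and token here is hypothetical and only illustrates the idea of leasing GPU time over HTTP and submitting a job against it.

```python
import requests

BASE_URL = "https://gpu-service.example.com/api/v1"  # placeholder endpoint
TOKEN = "my-api-token"                               # placeholder credential

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {TOKEN}"})

# Reserve a GPU for three hours, then submit a job against the lease.
lease = session.post(f"{BASE_URL}/gpus/lease", json={"hours": 3}).json()
job = session.post(
    f"{BASE_URL}/jobs",
    json={"gpu_id": lease["gpu_id"], "script": "train.py"},
).json()
print(job["status"])
```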

Thus, in a world with an ever-expanding need for quick computational power, a GPU layer that plugs in simply as a service could be very helpful to many businesses, and turning the GPU into a service is a great startup idea.
