Artificial intelligence and big data are the defining themes of our era.


Every aspect of our lives is directly or indirectly influenced by AI and big data. While applications span almost every industry, the majority of the computation is performed by a handful of centralized cloud services operated by Amazon, Microsoft, Google, Alibaba, and others.

This reality not only puts users' data security at risk, but also limits the growth of AI in our societies. One would expect the future of AI to lie in a decentralized scenario in which models are trained more locally, and data and models can be shared without security concerns.

Fortunately, the mathematical nature of artificial neural networks allows their training to be easily parallelized (and thus distributed), whether for large models or for models trained on large datasets. This gives deep learning essentially unlimited potential for distributed training by end users.

Beyond deep learning, most other machine learning models, such as Random Forest and LASSO, can also be parallelized. The decentralized framework of Federated Learning can therefore become the mainstream approach to training machine learning models in the future.
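To make the federated idea concrete, the following is a minimal sketch of federated averaging: each client runs gradient descent on its own private data, and a server only averages the resulting model weights. The model (a one-parameter linear fit), the function names, and the data are all illustrative assumptions, not part of any published VisionX design.

```python
# Minimal federated-averaging sketch: each client fits y = w * x on its
# local data; only the trained weight w (never the raw data) is shared.

def local_train(w, data, lr=0.1, epochs=20):
    """Plain gradient descent on one client's private (x, y) samples."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, client_datasets):
    """One cycle: broadcast the global weight, train locally, average."""
    local_weights = [local_train(w_global, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Three clients, each holding samples drawn from y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(1.5, 4.5), (3.0, 9.0)],
    [(0.5, 1.5), (2.5, 7.5)],
]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward 3.0
```

The key property this illustrates is that the server never sees any client's data, only model parameters, which is what makes the framework attractive for data security.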

Federated Learning Cycles


The main structure of the VisionX parallelized-computing framework
will comprise three hierarchical layers:

  1. The central server (DBC GPU Server) - which performs minimal computation

  2. The node layer (DBC Edge GPU Nodes) - which performs most of the computation

  3. The end-user layer (users' electronic devices) - which provides the data
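The division of labor across the three layers above can be sketched as follows. This is a hypothetical illustration of the roles only; the class names and training logic are assumptions for the sake of the example, not a published VisionX API.

```python
# Hypothetical three-layer sketch: devices supply data, edge nodes do
# the heavy computation (local training), and the central server only
# performs a lightweight merge of the node results.

class Device:
    """End-user layer: contributes raw samples, performs no training."""
    def __init__(self, samples):
        self.samples = samples

class EdgeNode:
    """Node layer: trains on data gathered from its attached devices."""
    def __init__(self, devices):
        self.devices = devices

    def train(self, w, lr=0.1, epochs=20):
        data = [s for d in self.devices for s in d.samples]
        for _ in range(epochs):
            grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
            w -= lr * grad
        return w

class CentralServer:
    """Central layer: minimal computation -- averages node weights."""
    def aggregate(self, weights):
        return sum(weights) / len(weights)

# Four devices holding samples from y = 3x, split across two edge nodes.
devices = [Device([(1.0, 3.0)]), Device([(2.0, 6.0)]),
           Device([(1.5, 4.5)]), Device([(3.0, 9.0)])]
nodes = [EdgeNode(devices[:2]), EdgeNode(devices[2:])]
server = CentralServer()

w = 0.0
for _ in range(5):
    w = server.aggregate([node.train(w) for node in nodes])
```

Note how the computational burden falls on the edge nodes, while the server's `aggregate` step is a single average, matching the "minimal computation" role described above.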