- Software-defined Compute
The hardware accelerator, built around 4Paradigm’s high-dimensional algorithms, meets the specialized performance requirements of AI workloads at the chip level with a stronger cost advantage. It is expected to improve training performance by up to 10 times in real application scenarios.
- Software-defined Networks
Addresses the communication-efficiency problems of AI in distributed machine learning environments and, with 4Paradigm’s independently developed high-dimensional distributed network communication protocol, breaks through the horizontal data-traffic bottleneck.
- Software-defined Storage
Runs AI training at close to in-memory speed, handling the critical storage, read and write operations required during machine-learning and self-learning training. This greatly accelerates the speed at which AI models evolve.
The high-dimensional machine learning engine supports models with up to a trillion dimensions, and its extreme computing power greatly improves the predictive quality of the model.
Achieves accurate, millisecond-level decision-making over trillion-dimensional models, improving business efficiency in real time.
Supports business systems in self-learning and continuous self-iteration through the closed loop of data ingestion, model training and model deployment, so the AI application reacts to rapid business changes without manual intervention.
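To illustrate how a model can address a trillion-dimensional feature space without storing a trillion values, the sketch below uses the generic feature-hashing trick (an illustrative technique, not necessarily 4Paradigm's implementation): only the features that actually occur in a record are materialized, so prediction cost depends on the number of active features, not the nominal dimensionality.

```python
# Illustrative sketch of sparse high-dimensional features via feature hashing.
# All names and the DIM constant are our own assumptions for illustration.
import hashlib

DIM = 10**12  # nominal dimensionality of the feature space


def hash_feature(name: str, dim: int = DIM) -> int:
    """Map a raw feature string to a stable index in [0, dim)."""
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return int(digest, 16) % dim


def featurize(raw: dict) -> dict:
    """Turn raw key=value pairs into a sparse {index: weight} vector."""
    return {hash_feature(f"{k}={v}"): 1.0 for k, v in raw.items()}


def score(sparse_x: dict, weights: dict) -> float:
    """Dot product touches only non-zero entries, so a prediction over a
    trillion-dimensional space costs O(active features)."""
    return sum(w * weights.get(i, 0.0) for i, w in sparse_x.items())


x = featurize({"user_city": "Beijing", "device": "ios", "hour": 21})
print(len(x), "active features out of a nominal", DIM)
```

The same idea underlies most large-scale sparse linear and tree models: the model's weight table is also stored sparsely, keyed by hashed index.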
The AI methodology is based on Kolb’s experiential learning cycle. It saves AI development time and improves efficiency through four standard steps.
Data Governance for AI
Can ingest PB-scale and larger log volumes, supports real-time storage, and forms a closed loop of online data collection and processing.
Sufficient Computing Power
The ultra-high computing power increases the performance of a single SageOne server by 4-8 times on average and reduces TCO by 50% in a single AI application scenario, lowering the barrier to applying AI.
High-performance Integrated Software and Hardware AI System
Deep Optimization of Integrated Software and Hardware
Maximizes the AI capabilities of the Sage software while unlocking the full potential of the server, providing powerful performance for distributed large-scale model training and high-throughput online prediction, and bringing ultra-high-performance AI pipelines to enterprises. In comprehensive testing and verification, performance in risk control, anti-fraud, anti-money-laundering, intelligent marketing and other scenarios increased by more than four times on average compared with general-purpose servers.
High-performance Server Configuration
Adopts 2nd Generation Intel® Xeon® Scalable processors with up to 28 cores and 56 threads per processor.
Uses Intel® Turbo Boost Technology 2.0, up to 4.4GHz.
Uses frequency-locking technology to hold the frequency at 2.8GHz while maintaining a high core count, ensuring peak performance for feature engineering, model training, model prediction and other workloads, and maximizing the potential of the CPU.
Supports up to 24 DDR4 memory slots, memory frequencies up to 2933MHz, and up to 3TB of memory. Supports Intel® Optane™ DC Persistent Memory (DCPMM).
The self-developed AI computing accelerator card is built on the Intel A10 chip, with data-compression and model-training algorithms written in the latest OpenCL language. Its compression speed is significantly higher than GZip’s, and it delivers a 20-30% performance improvement over the GBM algorithm running on CPU. The card accelerates and optimizes resources across the entire AI pipeline, performing feature placement computation and model compression concurrently, which greatly improves storage and network-transmission performance. Further cost-effective gains come from deep performance optimization of the self-developed GDBT, the self-developed DSN, GBM, DNN and other machine learning algorithms, all with independent intellectual property rights, customized board drivers and support for distributed cluster environments.
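The throughput-versus-ratio tradeoff that motivates offloading compression to a dedicated card can be seen even with a stock CPU codec. The sketch below uses Python's zlib as a stand-in (not the card's algorithm; the sample data is invented) to time compression at different effort levels:

```python
# Illustrative only: CPU-side compression cost at different effort levels.
# The data string and level choices are our own assumptions.
import time
import zlib

data = b"user_id=42,city=Beijing,clicks=7;" * 20000  # ~660 KB of log-like text

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: ratio {len(data) / len(out):.1f}x in {dt * 1000:.2f} ms")
```

Higher effort levels trade CPU time for ratio; a hardware offload aims to take that CPU time off the host entirely, which is why compression speed rather than ratio is the headline comparison against GZip.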
100GE Network Acceleration
Because distributed machine learning involves a large number of tasks and data-distribution communication steps during feature engineering and training, network I/O between server instances usually becomes the performance bottleneck. SageOne Advanced therefore introduces a software-defined communication protocol and tunes the software configuration for it on a 100GE/RoCE network. Remote direct memory access over Ethernet reduces CPU usage during high-volume data transfers, and the high-speed network removes the network I/O bottleneck in the AI pipeline.
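A rough back-of-the-envelope calculation (our own illustration with hypothetical numbers, not vendor measurements) shows why link speed dominates the shuffle time of distributed training:

```python
# Hypothetical example: time to exchange one pass of intermediate data
# over links of different speeds. `per_pass_gb` and `efficiency` are assumptions.
def shuffle_seconds(data_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds to move `data_gb` gigabytes over a `link_gbps` Gb/s link,
    assuming the given fraction of line rate is achieved."""
    return (data_gb * 8) / (link_gbps * efficiency)


per_pass_gb = 500  # hypothetical GB exchanged per training pass

print(f"10GE : {shuffle_seconds(per_pass_gb, 10):.0f} s per pass")
print(f"100GE: {shuffle_seconds(per_pass_gb, 100):.0f} s per pass")
```

With these assumed numbers the 100GE link cuts per-pass communication time tenfold, which is why the bottleneck shifts away from network I/O; RDMA then keeps the CPU out of the copy path so the saved time is not spent on protocol processing.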
High-performance Storage Tuning
For the configured Intel high-performance SSDs, the Sage software integrated in SageOne Advanced has been customized for SSD usage in feature processing, data placement and other stages. Spark and other components have also been upgraded and tuned relative to the standard Sage release, so that software and hardware integrate well on SageOne Advanced.
Hardware Performance Optimization
Self-designed compute acceleration card, increasing training performance by up to 10 times
Software-defined Communication Protocol
Transfers data directly from memory to memory with near-zero added latency
Optimizes machine learning I/O performance, improving throughput and reducing deployment costs
Deployment, Operation and Maintenance
One-stop Software and Hardware Delivery of the Rack Server
SageOne Advanced adopts a one-stop delivery model, providing customers with end-to-end delivery services covering servers, network equipment, software deployment and functional testing. It offers a single solution to system-building difficulties such as hardware preparation, network configuration, resource and topology planning, component compatibility and software deployment. This shortens the overall project cycle and gives enterprises out-of-the-box AI capabilities, helping industries and enterprises apply artificial intelligence quickly and opening a high-speed channel for intelligent upgrades.
Horizontal Expansion and Reduction
Clusters can be adjusted flexibly and conveniently as business volume changes, scaling horizontally and linearly in either direction.
Cluster Management, Operation and Maintenance
With centralized, unified management of the integrated clusters, it provides global real-time monitoring, operation and maintenance through a single interface.