Ease of Use

aiDAPTIV+ allows you to spend
more time training your data,
not your team of engineers.

Cost and Accessibility

Phison’s aiDAPTIV+ leverages cost-effective NAND
flash to increase access to large-language model
(LLM) training with commodity workstation hardware.

Privacy

aiDAPTIV+ workstations allow
you to retain control of your
data and keep it on premises.

Streamlined Scaling for Data Model Training

aiDAPTIV+ is the ultimate turnkey solution for organizations to train large data models without additional staff and infrastructure.

The platform scales linearly with your data training and time requirements, allowing you to focus on results.

Hybrid Solution Boosts LLM Training Efficiency

Phison’s aiDAPTIV+ is a hybrid software / hardware solution for today’s biggest challenges in LLM training.

A single local workstation PC from one of our partners provides a cost-effective approach to LLM training, up to Llama 70B.

Scale-out

aiDAPTIV+ allows businesses to scale-out nodes to increase training data size and reduce training time.

Unlock Large Model Training

Until aiDAPTIV+, small and medium-sized businesses were limited to small, imprecise training models, without the ability to scale beyond Llama-2 7B.

Phison’s aiDAPTIV+ solution enables significantly larger training models, giving you the opportunity to run workloads previously reserved for data centers.

BENEFITS

  • Transparent drop-in
  • No need to change your AI Application
  • Reuse existing HW or add nodes

aiDAPTIV+ MIDDLEWARE

  • Slice the model and assign slices to each GPU
  • Hold pending slices on aiDAPTIVCache
  • Swap pending slices with finished slices on the GPU
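The swap cycle described in these bullets can be sketched as a simple staging loop. This is an illustrative model only, under assumed names (`train_with_slice_swapping`, the cache and GPU queues); it is not Phison's middleware API:

```python
from collections import deque

def train_with_slice_swapping(num_slices, gpu_capacity):
    """Process model slices that exceed GPU memory by holding pending
    slices in a flash-backed cache and swapping them in as earlier
    slices finish. Purely a sketch of the scheme, not real middleware."""
    cache = deque(range(num_slices))   # pending slices staged on the cache tier
    gpu = []                           # slices currently resident in GPU memory
    finished = []

    # Fill GPU memory up to its capacity.
    while cache and len(gpu) < gpu_capacity:
        gpu.append(cache.popleft())

    # Swap loop: as each slice finishes, evict it and pull in the next pending one.
    while gpu:
        done = gpu.pop(0)                # slice whose compute step completed
        finished.append(done)
        if cache:
            gpu.append(cache.popleft())  # swap a pending slice into GPU memory
    return finished

# e.g. an 8-slice model on a GPU that holds only 2 slices at a time
order = train_with_slice_swapping(8, 2)
```

Every slice is eventually processed even though the GPU never holds more than `gpu_capacity` slices at once, which is the point of extending GPU memory with a cache tier.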

SYSTEM INTEGRATORS

  • Access to ai100E SSD
  • Middleware library license
  • Full Phison support for bring-up
SEAMLESS INTEGRATION

  • Optimized middleware to extend GPU memory capacity
  • 2x 2TB aiDAPTIVCache to support 70B model
  • Low latency

HIGH ENDURANCE

  • Industry-leading 100 DWPD with a 5-year warranty
  • SLC NAND with advanced NAND correction algorithm
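To put the 100 DWPD rating in perspective, a short back-of-the-envelope calculation (illustrative arithmetic only, using the 2 TB capacity and 5-year warranty figures from the bullets above):

```python
def total_writes_tb(capacity_tb, dwpd, warranty_years):
    """Total data writable over the warranty period:
    DWPD means 'drive writes per day', i.e. full-capacity
    writes per day, sustained for the warranty lifetime."""
    return capacity_tb * dwpd * 365 * warranty_years

# One 2 TB aiDAPTIVCache drive at 100 DWPD over a 5-year warranty:
print(total_writes_tb(2, 100, 5))  # 365000 TB written, i.e. ~365 PB
```

That sustained write budget is what makes flash viable as a swap tier for training, where model slices are written and rewritten continuously.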