Acer
AI Workstations

Run LLMs locally.
No APIs. Full control.

For developers, AI builders & startup teams.

RA100
from €X,XXX
GN100
from €3,999
Official Acer
3yr On-Site Warranty
EU Shipping
⚡ Ships in EU · limited availability
🔧 Used for local inference, agents & automation
Acer Veriton RA100
RA100
60 TFLOPS · AMD Ryzen AI Max+
Acer Veriton GN100
GN100
1 PETAFLOP · NVIDIA GB10
Run LLMs Locally · No API Costs · Full Data Control · NVIDIA GB10 Superchip · 1 PETAFLOP AI · 128GB Unified Memory · AMD Ryzen AI Max+ · Official Acer Reseller · EU Shipping · 3-Year On-Site Warranty

The problem

Cloud inference gets expensive at scale.

Pay per token — cost grows with usage
Every inference call is a line item on your bill
Network latency blocks real-time use
Round-trip overhead kills production performance
No control over model or runtime
Vendor changes, rate limits, and outages are out of your hands
Data leaves your infrastructure
Every query sent externally — compliance risk by default
api-invoice-2024.txt
Jan · 2.1M tokens · €168.00
Apr · 5.4M tokens · €432.00
Aug · 9.8M tokens · €784.00
Dec · 14.2M tokens · €1,136.00
Annual total · €6,240.00
↑ And that's before you scale.
Local AI equivalent: €0.00 / query

Your stack

Run your AI stack locally.

LLM inference (7B → 70B+)
Agent orchestration (multi-process)
Fine-tuning & embeddings
Batch inference jobs

Full stack. On-device.

Compatible with

PyTorch · Deep learning
Ollama · Local LLM runtime
Docker · Containerized AI
Jupyter · Interactive notebooks
CUDA · GPU acceleration (GN100)
OpenClaw · Multi-agent (GN100)
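As a concrete example of the local-first stack, a runtime like Ollama serves models over an HTTP API on the machine itself, so inference never leaves your network. A minimal sketch of building a request for Ollama's documented `/api/generate` endpoint (the model name is illustrative; any model you have pulled locally works):

```python
import json

# Ollama's local HTTP API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests a single JSON response instead of a
    token-by-token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("llama3:70b", "Summarize our Q3 report.")
print(json.dumps(payload))
# To actually run inference, POST this payload to OLLAMA_URL with any
# HTTP client, e.g.:
#   requests.post(OLLAMA_URL, json=payload, timeout=120).json()["response"]
```

The same pattern applies to any local model server: the endpoint is localhost, so there is no per-token bill and no data egress.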

Choose your hardware

Two machines. One mission.

AMD Ryzen AI Max+ 395 · 60 TFLOPS GPU · 50 TOPS NPU
Up to 128GB LPDDR5X · Wi-Fi 7 · Windows 11 Copilot+
Best for: ≤30B models · dev · prototyping
Acer Veriton
RA100
Recommended for ≤30B models
Most Popular
from €X,XXX
Acer Veriton RA100
Run LLMs locally (7B → 30B)
Up to 128GB LPDDR5X memory
AMD Ryzen AI Max+ 395 processor
60 TFLOPS GPU · 50 NPU TOPS
Wi-Fi 7 · Bluetooth 5.4
Windows 11 Pro Copilot+
→ Recommended if you run ≤30B models
→ Best starting point for 90% of AI developers
Start with RA100

GN100 vs Mac Studio

Mac or dedicated AI workstation?

Both run AI. Only one is built for it.

Acer Veriton
GN100
AI-First
Acer Veriton GN100
Compute
1 PETAFLOP
Memory
128GB
AI Stack
NVIDIA CUDA
OS
DGX OS
Full CUDA / NVIDIA ecosystem
Optimized for LLM inference
Run 70B → 200B+ models locally
Link 2× units for 256GB memory
Order GN100
Apple
Mac Studio
General
Apple Mac Studio
Compute
~14 TFLOPS
Memory
≤192GB
AI Stack
Metal
OS
macOS
No CUDA — Metal only
Limited LLM scaling
No continuous inference
No multi-unit option

Real-world use

What you can build.

Run LLaMA / Mistral / Qwen locally
→ Full inference, zero API cost, full data control
Multi-agent orchestration
→ Autonomous AI workflows, no cloud bottleneck
Scheduled inference jobs
→ Cron / queue workers, zero per-run cost
Private AI assistants
→ Data never leaves — full GDPR by design
Fine-tuning & embeddings
→ Train on your own datasets, no upload required
Process sensitive data offline
→ Full compliance, zero data exposure risk
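The "scheduled inference jobs" pattern above is just a queue worker with no metering in the loop. A minimal sketch, where `run_inference` is a hypothetical stand-in for whatever local runtime you call:

```python
import queue

def run_inference(prompt: str) -> str:
    # Hypothetical stand-in: in practice this would call your local
    # runtime (e.g. an HTTP request to a model server on localhost).
    return f"result for: {prompt}"

def drain(jobs: "queue.Queue[str]") -> list[str]:
    """Process every queued prompt. On local hardware there is no
    per-run cost and no rate limit, so the worker can simply drain."""
    results = []
    while not jobs.empty():
        results.append(run_inference(jobs.get()))
    return results

jobs = queue.Queue()
for prompt in ["classify ticket #1", "summarize doc A"]:
    jobs.put(prompt)
print(drain(jobs))
```

Swap the in-process queue for cron, systemd timers, or a Redis queue as the workload grows; the cost model stays the same.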

Model compatibility

What runs on this.

Real models. Real inference. On your desk.

Model · Runs on · VRAM / RAM · Use case
LLaMA 3 70B · GN100 · ~80–120 GB · Production AI apps & agents
Mistral 7B · RA100 · ~8–16 GB · Dev, prototyping, chatbots
Qwen 72B · GN100 · ~90–128 GB · Enterprise LLM workloads
Phi-3 Mini · RA100 · ~4–8 GB · Embedded AI, edge inference
LLaMA 3.1 405B · 2× GN100 · ~400 GB+ · Research, frontier AI
Your fine-tune · RA100 / GN100 · Varies · Your proprietary model

Performance tiers

Pick your tier.

RA100
Efficient
AMD Ryzen AI Max+ 395 · 60 TFLOPS · 50 TOPS NPU
Acer Veriton RA100
Dev environments & prototyping
Medium LLMs (7B–70B+)
Best cost / performance ratio
Windows 11 Copilot+ ecosystem
AMD Radeon AI software stack
128GB LPDDR5X · Wi-Fi 7
GN100
Maximum
NVIDIA GB10 Grace Blackwell · 1 PETAFLOP · 128GB unified
Acer Veriton GN100
Large LLMs (70B → 200B+ locally)
Heavy inference & fine-tuning
Near server-grade on your desk
Full NVIDIA CUDA ecosystem
Link 2× units → 256GB memory
NVIDIA DGX OS — AI-native stack

Economics

Own your compute.

€0.00
per query after hardware
Infinite inference, zero marginal cost
∞
queries per day
No rate limits. No throttling. Ever.
100%
data stays on-device
Full GDPR compliance by design
Cloud AI
Cost model: Pay per token
Scaling: Cost grows with usage
Lock-in: Vendor dependency
Data: Leaves your environment
Latency: Network-dependent
Local AI (GN100 / RA100)
Cost model: One-time hardware
Scaling: Zero marginal cost
Lock-in: Full ownership
Data: Never leaves device
Latency: Sub-millisecond, local
€3,999 once vs €6,000+/year in API costs. Break-even depends on workload — heavy usage typically offsets cost quickly.
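The break-even claim can be made concrete. The invoice figures earlier on this page work out to roughly €80 per million tokens; the monthly volume below is an illustrative assumption, not a measurement:

```python
def breakeven_months(hardware_eur: float, tokens_per_month: float,
                     eur_per_million_tokens: float = 80.0) -> float:
    """Months of cloud API spend needed to match a one-time hardware cost."""
    monthly_cost = tokens_per_month / 1_000_000 * eur_per_million_tokens
    return hardware_eur / monthly_cost

# At ~6.5M tokens/month (~€520/month in API fees), a €3,999 GN100
# pays for itself in under 8 months.
print(round(breakeven_months(3999, 6_500_000), 1))  # 7.7
```

Halve the volume and break-even doubles; the heavier the workload, the faster owned hardware wins.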

Ready to own your compute

Choose your AI infrastructure.

Choose your setup and start running models locally today.

Official Acer · Full warranty · EU shipping · 3-year on-site support

Veriton RA100
RA100
from €X,XXX
Veriton GN100
GN100
from €3,999

Official Acer reseller · 3-year on-site warranty · Free EU shipping