An on-premises AI data center for a steel plant is not a server room. It is a sovereign GPU rack landed inside a Purdue-segmented mill network, sealed against red dust and mill scale, holding line-side at L3 with conduits down to L1 / L2 PLCs (read-only) and up to L4 enterprise IT (results only), never crossing the cyber zone boundary the wrong way. iFactory designs the data center to mill-floor reality: NEMA 12 / IP55 dust-tight cabinets, 35–45 °C ambient tolerance, ISA / IEC 62443 zones & conduits, IEC 61850 substation hooks, NVIDIA GB300 for training, H200 for fine-tuning and live inference, Jetson at the line for vision. Power and a network drop are the only things you provide. One-time CapEx: you own the rack, the GPUs, the firewall layer, and the audit trail. Zero cloud dependency. Zero data egress out of the mill perimeter. To scope a unit, get a turnkey quote.
Upcoming iFactory AI Live Webinar:
On-Prem AI Data Center for Steel — Sovereign by Architecture
Reference architecture for landing GPU compute on a Purdue-segmented steel mill network. NEMA 12 / IP55 enclosures, ISA / IEC 62443 zones & conduits, GB300 + H200 + Jetson edge. Survives 35–45 °C ambient, red dust, EAF radiant heat. Zero cloud dependency. Zero data egress. You own the rack the day it goes live.
Why On-Prem — Not Cloud, Not Hybrid
Steel plants are critical infrastructure, with availability requirements measured in minutes of downtime per year and cyber posture measured against IEC 62443 zones. Cloud AI fails on all four dimensions that matter to a plant CIO: latency, sovereignty, audit, and availability. On-prem is the architectural answer, not a preference.
Latency: EAF off-gas vision, BOF flame analysis, and hot-strip mill defect detection all need round-trip inference under 100ms. Cloud round-trips routinely exceed 250ms before you even count the WAN (a rough budget comparison follows the four points below).
Sovereignty: PLC tags, batch records, recipes, alloy chemistry, and customer order data all stay inside the mill perimeter. No vendor cloud sync. No model-registry call-out. Compliant with national and sectoral data residency requirements.
Audit: The auditor wants to see the conduit between L2 SCADA and the AI rack. With on-prem, you point at one firewall pair. With cloud, you hand over a five-page diagram of internet hops.
Availability: Mill availability cannot depend on a regional cloud incident or an internet outage. On-prem keeps the AI loop running through every external failure mode.
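A back-of-envelope comparison of the two latency paths, using the figures quoted above; the capture, preprocessing, and WAN numbers are illustrative assumptions, not measurements:

```python
# Rough latency budget check: on-prem edge path vs cloud round-trip.
BUDGET_MS = 100            # round-trip inference budget for line-side vision

on_prem = {
    "capture_and_preprocess": 10,   # assumed
    "edge_inference":         25,   # the sub-30ms Jetson figure
    "publish_to_l3":           5,   # assumed, plant LAN hop
}

cloud = {
    "capture_and_preprocess": 10,   # assumed
    "wan_round_trip":         80,   # assumed, mill to cloud region and back
    "cloud_service":         250,   # "routinely exceed 250ms"
}

for name, path in (("on-prem", on_prem), ("cloud", cloud)):
    total = sum(path.values())
    verdict = "within" if total <= BUDGET_MS else "blows"
    print(f"{name}: {total} ms, {verdict} the {BUDGET_MS} ms budget")
```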
Steel-Specific Reality · The Mill Floor Is Hostile
A steel mill is not a server room. Red iron-oxide dust, mill scale, slag particulates, EAF radiant heat, harmonic-rich electrical environments, and conductive metallic dust make every off-the-shelf data center assumption wrong. The architecture has to start from the floor conditions, not the spec sheet.
Reference Architecture · Purdue-Aligned
The on-prem AI data center fits cleanly into the Purdue Enterprise Reference Architecture. The rack lands at Level 3 (site operations), with read-only conduits down to L1 / L2 and results-only conduits up to L4. No new shortcut from L1 sensor to L4 dashboard — every layer is honored.
The architectural rule: data flows up freely (with rate limits); commands flow down only by exception. The AI never writes a setpoint toward L1 without a human-approved scenario routed through the scenario studio's approval gates.
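A minimal sketch of that write gate, assuming a simple approval record and an in-memory audit list; the class names, tag string, and hand-off are hypothetical stand-ins, not iFactory's shipped code:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Approval:
    """Human sign-off attached to a scenario before any downward write."""
    scenario_id: str
    approved_by: str
    approved_at: datetime

@dataclass
class SetpointWrite:
    tag: str                            # e.g. "EAF1/ElectrodeCurrent.SP" (hypothetical tag)
    value: float
    approval: Optional[Approval] = None

AUDIT_LOG = []  # in practice this lands in the IEC 62443 audit trail

def push_setpoint(write: SetpointWrite) -> bool:
    """Deny-by-default gate for L3 -> L2/L1 traffic.

    Telemetry flows up freely; a setpoint only crosses the conduit when a
    human-approved scenario is attached. Every attempt is logged either way.
    """
    allowed = write.approval is not None
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "tag": write.tag,
        "value": write.value,
        "approved": allowed,
        "scenario": write.approval.scenario_id if write.approval else None,
    })
    if not allowed:
        return False   # the AI's output stays a recommendation
    # ... hand off to the OPC UA write path behind the OT firewall ...
    return True
```

Deny-by-default keeps the recommendation a recommendation until a person signs the scenario; the audit entry is written whether or not the write goes through.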
The Three GPU Tiers · Where Each Lives
Three NVIDIA platforms, three jobs, three places in the architecture. Compute is deliberately spread across L1 vision edge nodes, L3 live inference, and L3 back-of-house batch training, rather than centralized in a single node.
Jetson · L1 vision edge: NVIDIA Jetson nodes installed in IP66 stainless enclosures next to EAF, BOF, hot-strip mill stands, casters. Sub-30ms vision inference for surface defect, slag detection, rolling-mill flatness. No internet path; results stream up to L3.
H200 · L3 live inference: NVIDIA H200 holds the live process twins (BOF heat, EAF energy, mill thermal) in HBM3e memory. Real-time inference every 30–60 sec. Periodic LoRA fine-tunes against historian backfill. Sized for the mill's tag count and twin scope.
GB300 · L3 training and batch: NVIDIA GB300 handles training jobs, what-if scenario batches, CFD-coupled simulations, plant-wide twin compose. Runs as batch: idle between training campaigns, saturated during recipe development or commissioning.
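Read as a routing table, the tiering above looks roughly like the sketch below; the workload class names are invented for illustration, and the budgets restate the figures in the text:

```python
# Illustrative GPU tier routing table, paraphrasing the tiering described above.
GPU_TIERS = {
    "jetson_l1": {
        "purdue_level": "L1",
        "hardware": "NVIDIA Jetson (IP66 enclosure at the line)",
        "workloads": ["surface_defect_vision", "slag_detection", "flatness_vision"],
        "latency_budget_ms": 30,
    },
    "h200_l3": {
        "purdue_level": "L3",
        "hardware": "NVIDIA H200",
        "workloads": ["live_process_twins", "lora_finetune"],
        "inference_cadence_s": (30, 60),
    },
    "gb300_l3": {
        "purdue_level": "L3",
        "hardware": "NVIDIA GB300",
        "workloads": ["model_training", "what_if_batches",
                      "cfd_coupled_sim", "plantwide_twin_compose"],
        "scheduling": "batch",
    },
}

def place(workload: str) -> str:
    """Return the tier a workload class lands on (simple lookup)."""
    for tier, spec in GPU_TIERS.items():
        if workload in spec["workloads"]:
            return tier
    raise ValueError(f"unknown workload class: {workload}")

assert place("slag_detection") == "jetson_l1"
```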
The Conduits · What Talks to What
Every cross-layer flow is a named conduit with an explicit firewall rule, protocol allowlist, and audit log. For a typical integrated steel mill deployment the conduit set is small enough to enumerate in full. Each conduit is a firewall rule, not an aspiration.
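One way to picture that conduit set is a default-deny allowlist. The entries below are illustrative only: the zone names and the MQTT edge path are assumptions, and the real ruleset is scoped per mill:

```python
# Hypothetical conduit table: each entry is one named conduit, i.e. one
# firewall rule with a protocol allowlist and mandatory audit logging.
CONDUITS = [
    {"name": "hist-to-ai", "src": "L3-historian", "dst": "L3-ai-rack",
     "direction": "up",   "protocols": ["OPC UA"], "mode": "read-only"},
    {"name": "plc-to-ai",  "src": "L2-scada",     "dst": "L3-ai-rack",
     "direction": "up",   "protocols": ["OPC UA"], "mode": "read-only"},
    {"name": "edge-to-ai", "src": "L1-jetson",    "dst": "L3-ai-rack",
     "direction": "up",   "protocols": ["MQTT"],   "mode": "results-only"},
    {"name": "ai-to-erp",  "src": "L3-ai-rack",   "dst": "L4-enterprise",
     "direction": "up",   "protocols": ["HTTPS"],  "mode": "results-only"},
    {"name": "ai-to-plc",  "src": "L3-ai-rack",   "dst": "L2-scada",
     "direction": "down", "protocols": ["OPC UA"], "mode": "approved-setpoints-only"},
]

def is_allowed(src: str, dst: str, protocol: str) -> bool:
    """Default-deny: traffic passes only if it matches a named conduit."""
    return any(c["src"] == src and c["dst"] == dst and protocol in c["protocols"]
               for c in CONDUITS)
```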
The Physical Rack · What Ships
The rack ships pre-configured. iFactory engineers commission on-site, but the build, wiring, GPU population, and firewall provisioning all happen in our integration lab. The plant team uncrates the rack, lands it, and connects power and the network drop. Day-one ready.
Integrated Mill vs Mini-Mill · Two Reference Footprints
Integrated mills (BF-BOF route) and mini-mills (EAF route) have different production scales, different telemetry density, and different risk profiles. The reference architecture adapts — same Purdue alignment, different sizing.
Integrated mill (BF-BOF route):
- 3–5 L3 racks, tied together (one per major area)
- 15–25 Jetson edge nodes
- 50,000+ live PI tags
- 240–480 TB storage
- BF stove + BOF heat + caster + hot strip + cold mill twins
- Typical commission: 14–18 weeks
Mini-mill (EAF route):
- 1–2 L3 racks (single control room)
- 6–12 Jetson edge nodes
- 15,000–25,000 live PI tags
- 120 TB storage
- EAF energy + LMF chemistry + caster + section / bar mill twins
- Typical commission: 8–12 weeks
What Stays In, What Stays Out
The architectural promise of on-prem is data residency. Here is what that means concretely — for the auditor, for the regulator, for the customer asking about your AI supply chain.
Stays inside the mill perimeter:
- PLC tag history & OPC UA streams
- Recipe, alloy chemistry, heat genealogy
- Customer order data, due dates
- Vision frames from EAF / mill stands
- Trained model weights (LoRA & full)
- Scenario simulation traces
- Operator action / approval logs
The only flows that cross the perimeter (sketched below as an egress allowlist):
- Aggregated KPIs to corporate (rolled up)
- Anonymized model health telemetry to iFactory support (opt-in)
- Software updates inbound (signed, IDMZ-staged)
- Remote support sessions (operator-initiated, time-limited)
- Nothing else. No raw data, no PI streams, no model weights.
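A minimal sketch of that perimeter policy as a default-deny egress allowlist, with hypothetical flow names mirroring the list above:

```python
# Illustrative default-deny policy at the mill perimeter / IDMZ.
EGRESS_ALLOWLIST = {
    "kpi_rollup_to_corporate":  {"payload": "aggregated KPIs only",  "direction": "outbound"},
    "model_health_to_ifactory": {"payload": "anonymized telemetry",  "direction": "outbound",
                                 "opt_in": True},
    "software_updates":         {"payload": "signed packages",       "direction": "inbound",
                                 "staged_in_idmz": True},
    "remote_support_session":   {"payload": "session traffic only",  "direction": "inbound",
                                 "operator_initiated": True, "time_limited": True},
}

# Never leaves, under any flow name.
BLOCKED_ALWAYS = ["raw PI streams", "PLC tag history", "vision frames",
                  "recipes and chemistry", "model weights"]

def may_cross_perimeter(flow: str) -> bool:
    """Anything not named in the allowlist is dropped at the perimeter."""
    return flow in EGRESS_ALLOWLIST
```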
Why iFactory
Most "industrial AI" pitches drop a server in your office network and call it a deployment. iFactory ships a Purdue-aligned, IEC 62443-zoned, mill-floor-rated reference architecture — rack, GPUs, firewall, storage, edge nodes — built for the way steel plants actually run. Schedule a working session.
- Rack lands at L3 with named conduits. Auditor sees zones & conduits, not a flat IT diagram. ISA / IEC 62443 alignment is built into the firewall ruleset.
- NEMA 12 / IP55 sealed cabinet, 35–45 °C ambient tolerance, in-rack closed-loop cooling, vibration-damped, transient-protected. Survives where IT racks fail.
- Jetson at L1 for vision, H200 at L3 for live inference, GB300 at L3 for training & scenarios. Right tool for each job, no over-spec, no under-spec.
- Every byte stays inside the mill perimeter. No vendor cloud sync. No model registry call-out. Compliant with national / sectoral data residency.
- Rack ships fully cabled, GPU-populated, firewall-provisioned. Plant uncrates, lands, connects power + network. Days, not months, to first inference.
- One-time CapEx. You own the rack, the GPUs, the firewall, the model weights. Talk to support.
Power + Network. We Handle the Rest.
You provide two things. Power: a 3-phase 32 A circuit at the L3 control room; a dedicated transformer feed is preferred, a conditioned existing feed is acceptable. Network drop: a Gigabit uplink with read-only access to the historian, OPC UA aggregator, and DCS / MES.
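As a quick sizing check, assuming a 400 V line-to-line supply (site voltage varies; a 415 V or 480 V feed changes the number), the requested circuit works out to roughly 22 kVA:

```python
import math

LINE_VOLTAGE_V = 400.0     # assumed line-to-line voltage; substitute your site value
CIRCUIT_CURRENT_A = 32.0   # the 3-phase circuit requested above

apparent_power_kva = math.sqrt(3) * LINE_VOLTAGE_V * CIRCUIT_CURRENT_A / 1000.0
print(f"3-phase {CIRCUIT_CURRENT_A:.0f} A at {LINE_VOLTAGE_V:.0f} V "
      f"≈ {apparent_power_kva:.1f} kVA available at the rack")
# ≈ 22.2 kVA: the envelope the UPS and GPU nodes are sized against
```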
iFactory ships the rest: NEMA 12 rack, GB300, H200 nodes, Jetson edge units, OT-IT firewall pair, SSD NAS, online UPS, switches, cabling, all pre-tested in the lab. On-site commissioning covers rack landing, network bridging, firewall provisioning, model bring-up, first inference, twin commissioning, and training across IT / OT / Ops / Validation.
8–18 Week Deployment
Mini-mills deploy faster (8–12 weeks) because the asset count and tag density are lower. Integrated mills run 14–18 weeks because of multi-area racks, more Jetson edge points, and longer model training campaigns.
Assess: Mill walk, dust / temp / vibration measurements, PI tag inventory, Purdue zone audit. Fixed-price BOM in 5 business days after the walk.
Build: Rack populated in the iFactory lab. Firewall provisioned. GPUs benchmarked. Twin models pre-trained on historian backfill. Pre-shipment FAT.
Commission: Engineers fly in. Rack landed, network bridged, firewall connected to the plant IDMZ. Twin runs in shadow mode alongside operators.
Go-live: Live OPC UA flow, dashboards in the IDMZ, edge inference at the line. Year-one support active.
FAQ
Why not put the AI compute in our existing IT data center?
Latency and zone hygiene. The IT data center is L4. Putting AI compute there means every PLC stream has to round-trip through the IDMZ, killing your sub-100ms inference budget. The L3 rack lives where the OT data lives.
Can we run this on our existing servers?
For non-GPU workloads (dashboards, historian replication), yes: your existing servers are fine and we'll integrate. For training and inference at GB300 / H200 scale, no: the thermal envelope and PCIe topology matter, and we ship purpose-built nodes.
How does remote support work without breaking the zone model?
Through operator-initiated, time-limited sessions in the IDMZ. An iFactory engineer can join read-only for support; never persistent, never inbound by default. All sessions are logged in the IEC 62443 audit trail.
How is it priced?
Fixed price per mill, scoped to integrated vs mini-mill, asset count, Jetson edge count, and twin scope. No per-tag billing, no per-inference fee. The price includes hardware, firewall, twin training, deployment, operator training, and year-one support. Get a quote: proposal in 5 days.
Join the Webinar. Or Get a Quote on Your Mill.
Watch the on-prem AI rack land into a Purdue-segmented mill on May 13. Or send your mill class (integrated / mini-mill), asset list, and PI tag inventory — we come back with a fixed-price BOM in 5 business days. Rack, GPUs (Jetson + H200 + GB300), firewall pair, storage, UPS, on-site commission, and year-one support all included. You own the rack outright the day it goes live. Zero cloud dependency, zero data egress.