Standardize OEE using ISO 22400 and wire it to Opcenter and Insights Hub so teams act on the same facts every shift.
From Shifting Numbers to Stable Metrics
OEE is one number that hides three different truths: availability, performance, and quality. Teams like it because it is easy to read. They struggle with it because each site defines time buckets and losses differently. When the inputs shift, the number moves for the wrong reasons. ISO 22400 fixes that by defining a standard time model and the elements that roll up into OEE and related indicators for manufacturing operations management (International Organization for Standardization, 2014). If you ground your calculation in this model and wire it to your execution and IoT stack, the number becomes stable enough to guide daily action rather than debate.
Start with one calendar and one time model. ISO 22400 expects a clear definition of planned time, operating time, and loss categories. That means shift calendars with breaks, changeovers, and maintenance windows are explicit, and machine states map to a standard set that everyone uses. In Opcenter Execution, you already track operations, results, and holds with timestamps and identifiers. Insights Hub collects signals and events from machines. Opcenter Intelligence layers both into a clean model across lines and sites. Together, they produce availability, performance, and quality with consistent definitions that match the standard (Siemens Digital Industries Software, n.d.-a; Siemens Digital Industries Software, n.d.-b; International Organization for Standardization, 2014).
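The ISO 22400-2 arithmetic behind these three ratios is short enough to sketch. The sketch below is a minimal illustration, assuming the standard's element names (planned busy time, actual production time, planned runtime per item); the `ShiftRecord` type and the example values are hypothetical, not taken from any Siemens product API.

```python
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    planned_busy_time: float        # PBT, minutes: planned time minus planned stops
    actual_production_time: float   # APT, minutes: time actually producing
    planned_runtime_per_item: float # PRI, minutes per unit at standard speed
    produced_quantity: int          # PQ, all units produced
    good_quantity: int              # GQ, units with no scrap or rework

def oee(rec: ShiftRecord) -> dict:
    """OEE index per ISO 22400-2: availability x effectiveness x quality ratio."""
    availability = rec.actual_production_time / rec.planned_busy_time
    effectiveness = (rec.planned_runtime_per_item * rec.produced_quantity) / rec.actual_production_time
    quality = rec.good_quantity / rec.produced_quantity
    return {
        "availability": availability,
        "effectiveness": effectiveness,
        "quality": quality,
        "oee": availability * effectiveness * quality,
    }

# Hypothetical shift: 480 min planned, 420 min producing, 0.5 min/unit standard
result = oee(ShiftRecord(480.0, 420.0, 0.5, 800, 776))
```

Because every term comes from the standard's time model, the same shift produces the same three ratios no matter which site computes them.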
Then define losses with names operators recognize. The standard does not force a single taxonomy for every plant. It defines the math and the time buckets. On the floor, your operators need names that make sense: setup, cleaning, minor stop, speed loss, scrap, and rework. Map those names to the ISO 22400 elements and keep the mapping visible. This way, the number is not a mystery. People see which loss increased and where to act. A good practice is to keep the first board small: yesterday, last seven days, and this shift, with trend arrows and a short comment field next to the top loss (Kang et al., 2016; Hester et al., 2017).
Three pains usually block useful OEE.
- Inconsistent definitions across sites. Without a shared time model, one site counts changeover as planned downtime and another counts it as a loss, so OEE cannot be compared or summed. ISO 22400 provides the structure to end this debate (International Organization for Standardization, 2014).
- Weak data lineage. Calculations done in spreadsheets drift. When a value is wrong, no one can trace it. Opcenter keeps the execution truth and identifiers. Insights Hub brings in machine runtime and counts. Opcenter Intelligence maintains calculated indicators with lineage and history (Siemens Digital Industries Software, n.d.-a; Siemens Digital Industries Software, n.d.-b).
- Dashboards without decisions. Many boards are pretty but noisy. A KPI hierarchy keeps OEE in context with throughput, cost, and quality so teams connect the number to actions that move flow and margin (Kang et al., 2016; Hester et al., 2017).
Ask This → Get That: a loop to stand up a defensible OEE in two weeks.
- Ask: What is the official shift calendar and planned time?
Get: a shared calendar in Opcenter that defines shifts, breaks, and planned stops. Insights Hub imports the same calendar so machine states align. This anchors availability in the ISO 22400 time model (International Organization for Standardization, 2014; Siemens Digital Industries Software, n.d.-c).
- Ask: Which machine states and losses do operators use today?
Get: a state map that collapses local terms into ISO 22400 categories and a small loss list that covers 80 percent of events. Configure state capture on HMIs and machine connectors so operators label events in the moment, not after the shift (International Organization for Standardization, 2014; Siemens Digital Industries Software, n.d.-a).
- Ask: How do we validate the math?
Get: a sample set of orders with known counts and times. Recalculate availability, performance, and quality using ISO 22400 formulas in Opcenter Intelligence and match against the sample. Fix time zone, rounding, or duplicate event issues before going live (International Organization for Standardization, 2014; Siemens Digital Industries Software, n.d.-b).
- Ask: What events should trigger action?
Get: three alerts: rising minor stops on the bottleneck, speed loss beyond a threshold, and repeat scrap on a product family. Alerts land in the daily meeting and on a small board near dispatch. This keeps focus on the dominant loss, not on the number itself (Kang et al., 2016; Hester et al., 2017).
- Ask: How will we share wins across sites?
Get: a simple template for a one-page case: baseline, change, result, and photo. Lighthouse research shows sustained improvement when teams standardize data and keep feedback loops short, which this routine supports (World Economic Forum, 2025).
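The validation step in the loop above can be sketched as a tolerance check: recalculate the indicators by hand from the sample orders, then diff against what the calculation layer produced. This is a minimal sketch with hypothetical names; the tolerance and the indicator keys are assumptions, and in practice the "computed" side would come from Opcenter Intelligence, not a dict.

```python
def validate_indicators(expected, computed, tol=0.005):
    """Flag indicators that drift beyond tolerance from the hand-checked sample.

    `expected` holds values recalculated by hand from orders with known counts
    and times; `computed` holds what the calculation layer produced. Any
    mismatch usually points at time zones, rounding, or duplicate events.
    """
    issues = []
    for name, want in expected.items():
        got = computed.get(name)
        if got is None:
            issues.append(f"{name}: missing from calculation layer")
        elif abs(got - want) > tol:
            issues.append(f"{name}: expected {want:.3f}, got {got:.3f}")
    return issues
```

An empty issue list is the go-live gate; anything else sends the team back to the event data before the board ships.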
Wire identifiers first so records travel. Use stable IDs for machines, lines, orders, products, and lots. The same ID should appear in Opcenter, Insights Hub, and the calculation layer. This is basic, but it is why you can drill from a weekly OEE to the exact shift, order, and step that hurt performance. It also allows you to pool results across lines without double counting (Siemens Digital Industries Software, n.d.-a; Siemens Digital Industries Software, n.d.-b).
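The drill-down that shared identifiers enable is essentially a keyed join. The sketch below assumes hypothetical record shapes and ID values (`M-07`, `WO-1001`); real records would come from the Opcenter and Insights Hub APIs, so treat this as an illustration of the join, not of either product's data model.

```python
# Hypothetical records keyed on the same stable IDs across both systems.
execution_rows = [
    {"machine_id": "M-07", "order_id": "WO-1001", "shift": "A", "good_qty": 480},
    {"machine_id": "M-07", "order_id": "WO-1002", "shift": "B", "good_qty": 455},
]
iot_events = [
    {"machine_id": "M-07", "order_id": "WO-1001", "state": "minor_stop", "minutes": 12.0},
    {"machine_id": "M-07", "order_id": "WO-1001", "state": "minor_stop", "minutes": 4.0},
]

def downtime_by_order(events):
    """Sum event minutes per (machine_id, order_id) so each event counts once."""
    totals: dict[tuple, float] = {}
    for e in events:
        key = (e["machine_id"], e["order_id"])
        totals[key] = totals.get(key, 0.0) + e["minutes"]
    return totals

def drill_down(rows, events):
    """Attach summed machine downtime to each execution row via the shared IDs."""
    totals = downtime_by_order(events)
    return [
        {**row, "downtime_min": totals.get((row["machine_id"], row["order_id"]), 0.0)}
        for row in rows
    ]
```

Aggregating events before the join is what prevents double counting when the same order appears in several signals.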
Keep the design accessible so everyone can read it. On the board, label lines directly, add data labels to the main chart, and include alt text such as, “Stacked bar of availability, performance, and quality for last seven days.” Use color plus labels, not color alone, and meet contrast guidance so the same information reaches all readers (World Wide Web Consortium, 2024).
Prove the value with a thin slice on the main constraint. Choose one line and one family. Run one week with the ISO 22400 model and the three alerts. Track OEE, and also track the loss that triggers the most alerts. A common early win is cutting minor stops through simple fixes like tool staging and a better changeover checklist. The point is not the number. It is the habit of using one model, one board, and one small weekly change that frees capacity without capital (Kang et al., 2016; World Economic Forum, 2025).
Finally, connect OEE to planning and release. When losses stabilize and the model is trusted, planners can use actual speed and uptime by product family to set realistic rates and due dates in APS. This closes the loop from performance to plan and back. The result is fewer expedites and a schedule that holds because it reflects what the line can actually do, not what the datasheet says (Siemens Digital Industries Software, n.d.-a; Siemens Digital Industries Software, n.d.-b).
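Deriving a realistic rate from trusted actuals is a small calculation. The sketch below is an assumption-laden illustration: the family names and run history are invented, and the choice of median over mean is a design suggestion, not a documented APS feature.

```python
from statistics import median

def realistic_rates(run_history):
    """Median observed units/hour per product family.

    `run_history` maps family -> list of (units_produced, run_hours) from
    completed orders. The median damps one-off best or worst runs, so the
    planner sees a rate the line can sustain rather than its record lap.
    """
    return {
        family: median(qty / hours for qty, hours in runs)
        for family, runs in run_history.items()
    }

# Hypothetical history: three runs of family FAM-A, each two hours long.
rates = realistic_rates({"FAM-A": [(100, 2.0), (90, 2.0), (120, 2.0)]})
```

Feeding a rate like this into APS, instead of the datasheet speed, is what lets due dates reflect what the line actually does.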
References
- Hester, P., Prenatt, J., & Jung, B. (2017). A method for key performance indicator assessment in manufacturing organizations. National Institute of Standards and Technology. https://www.nist.gov/publications/method-key-performance-indicator-assessment-manufacturing-organizations
This paper is relevant because it explains how to select and validate KPIs with stakeholder input and a clear mathematical basis. It covers value focused thinking, weighting, and assessment steps that make KPI programs more durable. Two takeaways are that stakeholder alignment reduces metric churn and that documented methods improve auditability.
- International Organization for Standardization. (2014). ISO 22400-2:2014 Automation systems and integration — Key performance indicators for manufacturing operations management — Part 2: Definitions and descriptions. https://www.iso.org/standard/54497.html
This standard is relevant because it defines OEE and related indicators in a consistent time model for manufacturing operations. It covers inputs, formulas, units, and behavior over time. Two takeaways are that shared definitions end cross site debates and that stable formulas make trends meaningful.
- International Organization for Standardization. (n.d.). ISO/DIS 22400-2 Automation systems and integration — Key performance indicators for manufacturing operations management — Part 2 [Draft in development]. https://www.iso.org/standard/87563.html
This draft is relevant because it signals ongoing maintenance of ISO 22400 and clarifies elements for modern operations. It covers scope notes and status for the next revision. Two takeaways are that the model continues to evolve and that teams should align to current language when available.
- Kang, N., Zhao, C., Li, J., & Horst, J. (2016). A hierarchical structure of key performance indicators for operation management and continuous improvement. National Institute of Standards and Technology. https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=919754
This paper is relevant because it shows how to place OEE inside a KPI hierarchy so it supports larger goals. It covers basic and comprehensive KPIs and their relationships. Two takeaways are that context prevents single number obsession and that causal links guide better projects.
- Siemens Digital Industries Software. (n.d.-a). Opcenter Execution. https://plm.sw.siemens.com/en-US/opcenter/execution/
This page is relevant because Opcenter Execution captures operations, results, holds, and context that feed OEE. It covers production tracking, traceability, and enforcement that make inputs reliable. Two takeaways are that execution is the record of truth and that signatures and genealogy support investigations.
- Siemens Digital Industries Software. (n.d.-b). Opcenter Intelligence. https://plm.sw.siemens.com/en-US/opcenter/manufacturing-intelligence/
This page is relevant because Opcenter Intelligence calculates and visualizes manufacturing KPIs across sites. It covers global visibility, near real time analytics, and data sources. Two takeaways are that centralized logic keeps formulas consistent and that cross site views expose repeatable wins.
- Siemens Digital Industries Software. (n.d.-c). Insights Hub. https://plm.sw.siemens.com/en-US/insights-hub/
This page is relevant because Insights Hub collects IIoT data and contextualizes it for analytics and action. It covers device connection, visualization, and scalable applications. Two takeaways are that near real time signals make availability visible and that standard connectors reduce custom work.
- World Economic Forum. (2025). Global Lighthouse Network 2025: The mindset shifts driving impact and scale. https://reports.weforum.org/docs/WEF_Global_Lighthouse_Network_2025.pdf
This report is relevant because it links standardized data and short feedback loops to sustained performance. It covers quantified outcomes and operating models from leading sites. Two takeaways are that common definitions enable replication and that daily learning cycles sustain gains.
- World Wide Web Consortium. (2024). Web Content Accessibility Guidelines 2.2. https://www.w3.org/TR/WCAG22/
This guideline is relevant because KPI boards must be readable by everyone. It covers text alternatives, contrast, and structure for accessible content. Two takeaways are that alt text and labels support assistive tech and that color plus labels avoids ambiguity.
- World Wide Web Consortium. (2023). What is new in WCAG 2.2. https://www.w3.org/WAI/standards-guidelines/wcag/new-in-22/
This page is relevant because it summarizes changes from WCAG 2.1 to 2.2 that affect dashboard design. It covers new success criteria and notes on parsing and navigation. Two takeaways are that new criteria improve navigation and that predictable behavior helps all users.