Objectifying safety and security performance with the support of the GR100 autonomous surveillance robot

Safety and security performance

Key takeaways

Objectifying safety and security performance means establishing criteria and field evidence so that decisions about the resources and service providers to be integrated can be compared: we no longer judge solely on the resources deployed, but on observed effectiveness (coverage, regularity, time to resolve doubts, ability to maintain performance over time). In this approach, the GR100 autonomous surveillance robot fits naturally as a tool for alerting, standardising and tracking safety and security actions: it provides usable data and ensures regular patrols, which limits performance drift over time and supports more objective decision-making.

Security is often assessed in terms of resources: staff, equipment, budgets. But the key question remains actual performance: are we measuring effectiveness, or just compliance? As professionals in the sector point out, compliance does not always mean effectiveness; a system may be appropriate for one site but inadequate for another.

In large industrial groups, many security managers have noticed a recurring imbalance: the company formalises requirements for resources (processes, presence, equipment) rather than performance requirements (detection, response, risk control). As a result, comparisons between solutions and service providers easily shift towards price, driven by a purchasing logic that pushes costs down — whereas, as Grégoire Laparade, security director at the Bolloré Group, puts it, “You can’t reduce security to a cost.”

This point is important: security is not a cost centre in the strict sense of the term. It generates savings by avoiding the cost of unforeseen events (incidents, downtime, losses, human, legal and reputational impacts). Demonstrating this requires criteria, evidence and KPIs. This is where an autonomous surveillance robot such as the GR100 comes into play: not only as a technological innovation, but as a tool for measurement, repetition and traceability that can be used directly to manage performance.

1. Objectives and means: frequent confusion

According to Jean-Pierre Vuillerme, former security director of the Michelin Group, objectives and means are often confused when it comes to safety and security. To avoid this confusion, three checks should structure the assessment:

  • Is the objective that has been set reasonable in view of the stakes (risks, criticality, impacts)?
  • Are the resources mobilised consistent with this objective (organisation, skills, technologies, procedures)?
  • Does the system drift over time (deviations, circumventions, fatigue, turnover, deprioritisation, decreased vigilance)?

It is a simple grid, but it changes the discussion: we no longer judge a service on its presence, but on its ability to achieve a realistic objective and to last over time. The GR100 fits naturally into this framework: it helps to verify what is actually being done and whether it is sustainable over time, thanks to repeatable autonomous rounds and on-site data collection.

2. Compliance vs. effectiveness: clarifying what we are actually measuring

Compliance verifies that resources exist and that procedures are followed. Effectiveness answers another question: in a specific scenario, does the system detect, qualify and trigger an appropriate response within the expected time frame?

Several practical tools can be used: audits, scenario tests, assessment of human performance, and repetition. In other words, performance is proven not by a checklist but by observable, reproducible results.

In this logic, the advantage of an autonomous system is not to add yet another layer, but to produce stable, verifiable execution. Take, for example, the GR100 patrol robot added to the existing processes and systems. It enables:

  • Regular patrols, carried out according to a defined plan (routes, frequencies, periods, etc.)
  • Measurements and checks of areas and points of interest that can be tracked and objectified
  • Alerts, detections and doubt resolution handled according to a defined workflow (depending on the organisation and configuration)
  • Reliable, usable data and operational statistics (logs, events, continuous improvement, legal, ROI, etc.)
Table: Traditional safety systems and the GR100 patrol robot
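
To make this concrete, here is a minimal sketch, in Python, of what a repeatable patrol plan could look like once expressed as data: an ordered route of checkpoints, a frequency and the resulting round schedule. The structures and field names below are illustrative assumptions for this article, not the GR100's actual configuration interface.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical structures for illustration only: they are not the GR100's
# actual configuration interface or data model.

@dataclass
class Checkpoint:
    name: str      # point of interest to inspect during the round
    dwell_s: int   # time spent checking the point, in seconds

@dataclass
class PatrolPlan:
    route: list[Checkpoint]   # ordered checkpoints of one round
    start: datetime           # first round of the period
    interval: timedelta       # how often the round is repeated
    rounds: int               # number of rounds planned for the period

    def schedule(self) -> list[datetime]:
        """Expand the plan into the planned start time of each round."""
        return [self.start + i * self.interval for i in range(self.rounds)]

# Example: a night plan covering three critical points every two hours.
plan = PatrolPlan(
    route=[Checkpoint("north gate", 60), Checkpoint("tank farm", 90), Checkpoint("loading dock", 60)],
    start=datetime(2024, 1, 15, 20, 0),
    interval=timedelta(hours=2),
    rounds=5,
)
for t in plan.schedule():
    print(t.isoformat())
```

Because the plan is explicit data rather than an informal habit, actual execution can be compared against it round after round, which is exactly what makes the indicators discussed below stable.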

3. Scenarios, tests, indicators: a short and defensible loop

To objectify performance, measurement should be organised around a simple loop:

1. Define contextualised scenarios

Whether it’s perimeter intrusion, fire outbreak or damage, the important thing is to choose realistic scenarios in order to prioritise needs and identify security flaws.

For example, in industry, we often find:
– Perimeter intrusion (opportunistic or planned)
– Unauthorised access to a critical area
– Circumvention of procedures
– Tracking, sabotage, damage

2. Testing beyond the audit

Based on the experience of many safety and security managers, there is real value in approaches based on field audits, intrusion tests, mystery shoppers and real-life scenarios, in order to measure actual performance and how well it holds up over time.

In a mixed system (human + tools), these tests measure:

  • detection
  • qualification (doubt removal)
  • reaction
  • the ability to sustain performance over time

3. Measure using effectiveness-oriented indicators (KPIs)

A few indicators are sufficient, provided they are stable and comparable:

  • Coverage of critical areas (actual vs planned)
  • Time taken to resolve doubts (signal → qualification)
  • Reaction time (assessment → action)
  • Detection quality (qualified alerts vs false positives)
  • Availability of critical resources (human and systems)
  • Test scores on periodic scenarios (comparable results)
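
As an illustration, the sketch below shows one way such indicators could be computed from patrol and alert logs. The record format (planned and visited checkpoints, alert timestamps, a false-positive flag) is a hypothetical example, not the GR100's actual data model.

```python
from statistics import median

# Illustrative only: the field names below are assumptions about what a
# patrol/alert log could contain, not the GR100's actual data model.

def coverage(planned: set[str], visited: set[str]) -> float:
    """Share of planned critical checkpoints actually covered."""
    return len(planned & visited) / len(planned) if planned else 1.0

def kpis(alerts: list[dict]) -> dict:
    """Compute doubt-resolution time (signal -> qualification), reaction time
    (qualification -> action) and detection quality from a non-empty alert log."""
    qualified = [a for a in alerts if not a["false_positive"]]
    return {
        "doubt_resolution_s": median(a["qualified"] - a["signal"] for a in alerts),
        "reaction_s": median(a["action"] - a["qualified"] for a in qualified),
        "qualified_alert_rate": len(qualified) / len(alerts),
    }

# Example: three alerts during a period, one of which is a false positive.
log = [
    {"signal": 0, "qualified": 90,  "action": 210, "false_positive": False},
    {"signal": 0, "qualified": 150, "action": 0,   "false_positive": True},
    {"signal": 0, "qualified": 60,  "action": 180, "false_positive": False},
]
print(coverage({"north gate", "tank farm", "loading dock"}, {"north gate", "tank farm"}))
print(kpis(log))
```

The point is not the code itself: it is that a repeatable execution trace makes these figures stable and comparable from one period to the next, which is what allows them to be used in governance discussions.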

4. Continuous improvement

Repetition is key: a system is credible when it holds up over time, despite changes in context, team and workload. The GR100 facilitates this step because it structures repeatable rounds and evidence (execution, observations, data collected), which reduces reliance on declarative reporting and makes the discussion more factual.

4. Expected quality vs. cost logic: putting performance back at the centre

In the field, security suffers from a structural problem: variability. Patrols are not strictly comparable, routes and schedules often change, operational constraints deprioritise certain actions, not to mention fatigue, staff turnover, overload, multitasking, etc.

Result: for lack of consistent, repeatable evidence, the discussion falls back on resources on the security department's side and on price on the purchasing side.

In reality, security professionals expect a high-quality service: the ability to detect, respond and reduce exposure to risk — not just patrol and tick boxes. However, as long as the company does not formalise performance requirements, the comparison becomes asymmetrical: price takes precedence, and the real value of the system is difficult to defend.

This is precisely where security managers need to have all the cards in hand: not to challenge budget constraints, but to demonstrate to the purchasing department, with supporting evidence, that one solution or service provider is a more strategic choice than another.

An autonomous outdoor robot such as the GR100 helps to make this shift, because it enables discussion:

  • of actual coverage (and not just “planned rounds”),
  • of consistency over time (and not just “organisation on paper”),
  • of observed delays (doubt removal, reaction),
  • and of actionable data for governance and continuous improvement.

5. The GR100 autonomous surveillance robot, a driver of performance

In a mature safety strategy, the GR100 safety robot is a standardisation and verification tool that is useful for managing performance.

  • Repetition and regularity

The robot performs repeatable programmed patrols (frequency, routes, checkpoints). This stabilises the indicators and facilitates comparison.

  • Operational traceability

The ability to document factually what has been done, when, where, and with what result transforms a discussion of opinion into a discussion of management.

  • Contribution to doubt resolution and response workflow

Depending on the configuration and procedures, the GR100 can help to resolve doubts and trigger an escalation (security post, agent, on-call service). This makes it possible to measure key performance indicators: qualification times, response times, alert quality.

This positioning is in line with the initial objective: to objectify safety in order to make more informed decisions and use it as a lever for governance.

As Jean-Pierre Vuillerme, former Safety Director at the Michelin Group, and Eric Balastre, former Prevention and Protection Director at the Renault Group, point out, safety is often seen as a cost centre. In reality, it saves on the cost of unforeseen problems: incidents, production stoppages, losses, damage to reputation, theft, vandalism, human and legal impacts, etc.

This is precisely why security managers must have all the cards in hand to prove performance, demonstrate the value of a system, and show that a service provider or solution is a strategic choice.

When criteria are objectified, arbitration between the security department and the purchasing department is no longer based solely on price, but also on the quality of the service, the tool, etc. As long as security is assessed through the prism of resources, it remains vulnerable to budgetary trade-offs and price comparisons. When it is assessed on the basis of performance requirements — scenarios, tests, rehearsals, field indicators — it becomes manageable, defensible and aligned with the site’s challenges.

Sources: Agora News Security – “Security: how to objectify performance for better decision-making”; CDSE – “New forms of security management organisation”

Alix Oudin

CMO at Running Brains Robotics
