Euroroute Cloud ACS
EU AI Act: Implications for ISPs
The EU AI Act is now in force. Its prohibited-use provisions applied from February 2025, and the requirements for high-risk AI systems — including conformity assessment, technical documentation, and registration obligations — apply from August 2026. For most Irish ISPs, this has not yet prompted much internal discussion. However, it is worth understanding where the Act’s scope does and does not reach into the tools ISPs commonly use.
The short version is that most network management and CPE provisioning platforms used by Irish ISPs are unlikely to fall into the high-risk category as defined by the Act. However, the Act’s broader transparency and accountability requirements touch on AI systems regardless of risk classification, and they apply to third-party suppliers as well as to the ISP deploying the tool. That distinction matters.
How the AI Act Classifies Risk
The AI Act uses a tiered risk classification. At the top are prohibited AI practices, which cannot be deployed at all: for example, real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions. Below that are high-risk AI systems, which face the most stringent requirements. These include AI used as a safety component in the management and operation of critical infrastructure, AI used in employment and workforce management, and AI that evaluates creditworthiness or eligibility for essential public and private services.
Below high-risk are limited-risk systems — primarily chatbots and synthetic media — which must disclose to users that they are interacting with AI. And below that are minimal-risk systems, which include most AI applications in commercial use today, for which there are no mandatory obligations beyond general compliance with EU law.
Most ISP tools fall into the minimal-risk category. Automated network diagnostics, CPE performance monitoring, remote provisioning, and fault prediction algorithms are not making decisions that directly determine whether individuals can access essential services in a way that triggers the high-risk classification. Predictive churn models that flag accounts for a retention call are similarly unlikely to meet the high-risk threshold, provided the ISP retains human discretion in how those flags are acted upon.
Where ISPs Do Need to Pay Attention
Even for minimal-risk AI, the Act establishes a baseline expectation of responsible use. The European AI Office, which coordinates enforcement at EU level and directly supervises general-purpose AI models, has made clear that the Act's underlying principles of transparency, human oversight, and accountability for AI outputs apply across all risk categories as a matter of good practice, and may inform how national supervisory authorities approach complaints and investigations.
The more immediate concern for ISPs is AI used in customer-facing interactions. If you deploy an AI-powered chatbot or virtual assistant for first-line customer support, the Act requires you to ensure that users know they are talking to an AI system and not a human agent. This is the limited-risk disclosure obligation, and it applies from August 2026, the same date as the high-risk requirements.
There is also an emerging question around AI tools used for credit or payment risk assessment. If an ISP uses automated scoring to decide whether to approve a subscriber for a particular service package or payment plan, and that scoring model draws on behavioural or financial data, it may come close to the high-risk boundary. The Act’s definition here is still being interpreted in national guidance, but it is worth reviewing with legal counsel if you operate in this space.
Supplier Accountability
The AI Act places obligations on both providers (the Act's term for those who develop and place AI systems on the market) and deployers. If your network management platform or Cloud ACS provider has embedded AI functionality such as anomaly detection, predictive maintenance, or automated remediation, then you, as the deploying ISP, share responsibility for ensuring that your use of those features is compliant.
In practice, this means asking your key technology suppliers whether they have assessed their products against the AI Act, what risk classification they have applied, and whether they have prepared the technical documentation and instructions for use that the Act requires for higher-risk systems. Reputable vendors operating in the EU market will have this documentation available. If they do not, that is a useful data point about their readiness.
It is also worth noting that the Act's general-purpose AI provisions, which cover large-scale foundation models used across multiple applications, have applied since August 2025. If your customer support or back-office tools use one of the major AI platforms as an underlying model, the provider of that platform has obligations around transparency and capability disclosure that you may want to understand as a customer.
A Proportionate Response
For most Irish ISPs, the AI Act does not require immediate structural change. The practical steps are relatively contained: audit the AI-enabled tools currently in use across the business, confirm their risk classification with vendors, check that any customer-facing AI systems have the required disclosure mechanisms in place, and flag anything that touches credit or essential service access decisions for more detailed review.
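The audit described above is essentially an inventory-and-flagging exercise. As a rough illustration only (the tool names, vendors, and flagging rules below are hypothetical and not drawn from the Act's text), it could be tracked in a simple structure that marks which entries need disclosure checks or detailed legal review:

```python
from dataclasses import dataclass

@dataclass
class AiTool:
    """One entry in a hypothetical AI-tool inventory."""
    name: str
    vendor: str
    risk_tier: str          # the Act's four tiers: "prohibited", "high", "limited", "minimal"
    customer_facing: bool   # limited-risk disclosure check applies if True
    touches_credit: bool    # credit / essential-service access: flag for legal review

def needs_review(tool: AiTool) -> bool:
    # Flag anything prohibited or high-risk, anything customer-facing
    # (disclosure mechanism check), and anything touching credit decisions.
    return (
        tool.risk_tier in ("prohibited", "high")
        or tool.customer_facing
        or tool.touches_credit
    )

inventory = [
    AiTool("Support chatbot", "ExampleVendor", "limited", True, False),
    AiTool("CPE fault prediction", "ExampleACS", "minimal", False, False),
    AiTool("Payment-plan scoring", "ExampleScore", "minimal", False, True),
]

flagged = [t.name for t in inventory if needs_review(t)]
```

Here the chatbot and the payment-plan scoring model would be flagged, while the fault-prediction tool would not; in practice the risk tier for each entry should come from the vendor's own AI Act assessment rather than an internal guess.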
The August 2026 compliance date for high-risk systems is the near-term milestone to work toward. Ireland’s national supervisory authority is still being established, but the regulatory direction is clear enough to act on now without waiting for enforcement guidance to mature.