In March of this year, the final form of the Artificial Intelligence Regulation (AI Act) was approved at the plenary session of the European Parliament; it will apply in full from 2026. The EU thereby sets the rules for models based on artificial intelligence, especially those it classifies as high-risk. The rules determine the conditions under which artificial intelligence systems may be placed on the market or put into operation, and how they may be used so that they do not pose risks to the safety, health and fundamental rights of citizens. What does the new legislation specifically bring, and how will it affect the business sphere? How far along are companies in the automotive industry today with the use of artificial intelligence?
Working with text – creating various literary forms from poems to jokes to essays – is one of the simplest activities that artificial intelligence enables through a number of algorithmic language models. Used in this form and in this way, it is undoubtedly a great help, which at first sight carries only minimal risk – provided, of course, that we do not trust the model's answers 100 per cent and verify them elsewhere.
But it is not quite that simple. Even chatbots and generative AI carry specific risks, and their users should be informed that they are interacting with a machine, since there is an obvious risk of manipulation.
The core of the European AI Regulation (AI Act) is its risk-based approach: AI systems are sorted into categories according to their level of risk, and the requirements for their development, deployment and use are set accordingly.
The regulatory framework defines four levels of risk for AI systems, whereby the higher the risk posed by the use of an AI system, the stricter the rules apply to it.
The vast majority of AI systems currently in use in the EU fall into the category of minimal or no risk. These systems (think of video games with AI) can be developed and used in accordance with existing regulations and do not require any additional regulation.
The limited-risk category includes the aforementioned chatbots and generative AI. Here, people must be made aware that they are interacting with an AI, i.e. a machine. Providers must ensure that AI-generated content is labelled as artificial, including audio or video.
AI systems designated as high-risk include AI technologies used in areas such as critical infrastructure (transport, energy, etc.), education, HR, banking, law and justice, as well as some product safety components (such as AI applications in robot-assisted surgery) and others. These are mostly AI systems that can negatively affect people's safety or restrict their fundamental rights. High-risk AI systems will be subject to stringent obligations before being placed on the market, including, among other things, the high quality of the data sets that feed the system and the provision of human oversight.
What is prohibited
The fourth category comprises AI systems with unacceptable risk, which are therefore outright prohibited. These include AI systems or applications that manipulate human behaviour to circumvent users' free will, systems that enable 'social scoring' by governments or companies, and certain predictive policing applications.
In addition, the Regulation prohibits certain uses of biometric systems, such as emotion recognition systems used in the workplace and certain systems for categorising people or real-time remote biometric identification for law enforcement purposes in publicly accessible areas.
“The AI Act covers a wide range of artificial intelligence systems, from low-risk systems such as chatbots to high-risk systems such as facial recognition or autonomous driving systems. The Regulation prohibits AI systems that pose an unacceptable risk to human safety, health or fundamental rights.”
There are some exceptions where such techniques can be used to prevent a particular threat (terrorist act, child at risk, etc.). These exceptions are precisely defined in the legislation and are subject to authorisation by a judicial or other independent authority.
Intelligent vehicles
It might seem that the AI Act matters first and foremost to the companies that develop these systems, but that is not the case. The regulation will be just as binding on the entities that use them – regardless of who developed them. Each organisation will need to familiarise itself with the new requirements and ensure that the systems it wants to use comply with them – which may take time and money. But it will certainly be worthwhile: if AI systems comply with EU legislation, the risk of litigation and penalties is minimised. Conversely, the correct use of AI systems can undoubtedly give companies a competitive advantage.
This is true for both applied AI and, in particular, embedded AI. While the EU AI Regulation does not explicitly address autonomous vehicles and their components, legislation in this area has an indirect impact on the automotive industry.
In particular, car companies will need to be transparent about how they use AI in their vehicles. This means that they will have to inform drivers and other users that AI systems are in use and how these systems work.
AI systems used in vehicles must be safe and reliable, which requires testing them and putting in place safeguards to prevent misuse of these systems.
Companies will need to collect and use data on the use of AI systems in a responsible way, which means obtaining consent from drivers and other users and ensuring that the data is protected from unauthorised access. (Data collection and use within the EU is generally governed primarily by the Data Act, the EU regulation on data flows.)
Finally, car companies will need to consider the ethical implications of using AI systems in their vehicles. This includes issues such as liability for accidents caused by autonomous vehicles or the risks associated with potential discrimination in object recognition.
It is necessary to prepare
Political consensus on the final text of the AI Act was reached among EU representatives at the end of 2023, followed by a majority vote in the European Parliament this year and agreement among EU Member States (the Council of the EU). The AI Act was published in the Official Journal of the EU in July and entered into force twenty days later, at the beginning of August. Now companies and authorities have two years to prepare before the regulation becomes fully binding, which will happen on 2 August 2026.
More specifically, the application of the rules under the AI Act will be phased in gradually. Certain provisions relating to prohibited practices will be enforceable as early as six months after entry into force, in February next year. Then, in August, the general AI obligations will begin to apply.
The two-year implementation period applies to all rules of the Act – with one exception, concerning high-risk AI systems that are intended for use as a safety component of a product and meet certain additional criteria. For these systems, the Regulation will apply from 2 August 2027.
Validity of the Regulation
- On 2 February 2025, the general provisions and the bans on prohibited AI practices take effect.
- On 2 August 2026, the Regulation enters into full force and effect – with the exception of the rules for certain high-risk AI systems.
- On 2 August 2027, the rules for those high-risk AI systems take effect.
It is with these systems that automotive companies should be particularly cautious. The European Digital Agenda Unit – part of the Office of the Government, falling under the competence of Deputy Prime Minister for Digitalisation Ivan Bartoš – monitors the activities of EU institutions in areas such as artificial intelligence, the data economy, a safer internet for children and cyber security. According to the unit, companies should check in particular which category the AI systems they are developing or implementing belong to.
“The automotive industry will continue to be mainly covered by sectoral legislation, which is however also linked to the AI Act through Annex I – List of harmonisation legislation of the Union. This lists the sectoral legislation which, due to its complexity, will have to be in line with the AI Act by 1 August 2027. Products included in the harmonised legislation that have an AI element as a safety component will be classified as high-risk, and the requirements will therefore also apply to them,” said Lenka Stavreva from the Cabinet of Deputy Prime Minister for Digitalisation.
Supervisory Authority
Companies that fail to comply with the rules once the legislation takes effect will face fines: up to €35 million or seven per cent of a company's worldwide annual turnover (whichever is higher) for breaches involving prohibited AI applications, up to €15 million or three per cent for breaches of other obligations, and up to €7.5 million or 1.5 per cent for providing incorrect information.
Administrative fines for violations under the AI Act for small and medium-sized businesses or start-ups should be lower.
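The "whichever is higher" rule behind these tiers can be sketched as a simple calculation. The figures below come from the tiers described above; the function and its tier names are purely illustrative, not a legal tool:

```python
def max_fine(turnover_eur: int, tier: str) -> float:
    """Maximum administrative fine under the AI Act tiers described above:
    the fixed cap or the turnover percentage, whichever is higher."""
    tiers = {
        "prohibited_practices": (35_000_000, 7),    # €35m or 7%
        "other_obligations": (15_000_000, 3),       # €15m or 3%
        "incorrect_information": (7_500_000, 1.5),  # €7.5m or 1.5%
    }
    cap, pct = tiers[tier]
    return max(cap, turnover_eur * pct / 100)

# A company with €1 billion worldwide annual turnover: 7% (€70m)
# exceeds the €35m cap, so the percentage applies.
fine = max_fine(1_000_000_000, "prohibited_practices")
```

For smaller companies the fixed cap dominates: at €100 million turnover, three per cent is only €3 million, so the €15 million cap is the relevant figure for "other obligations".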
An authority will have to be set up in each EU country to deal with AI and compliance with all the rules of the AI Act. According to Lenka Stavreva, the Czech Republic has not yet decided on a supervisory authority and is working on several options. “It will be up to the political decision of the government to decide which authorities will have these competences,” said Lenka Stavreva, adding that the obligation to designate such an authority comes into effect for all member states within 12 months of the AI Act coming into force, i.e. in August next year.
It has already started at headquarters
Companies, of course not only in the automotive industry, are aware of the existence of the AI Act and are starting to take more interest in the issue. Although most are already using some AI tools, they are still at the beginning of analysing the future obligations arising from the regulation. "We are now in the analysis phase to determine whether organisational measures or modifications to standards will be necessary in the development of integrated AI systems, for example in the area of in-car assistance systems," said a representative of Valeo in the Czech Republic.
At Continental Barum, they are waiting for a company-wide policy on AI. Headquarters is currently working on the Supercharge Your Work with Continental's AI Assistant project.
Similarly, at Vitesco Technologies, the issue is being addressed at the highest level in the company, where a team consisting of representatives from the legal department, engineers, production managers and other departments has been formed under the leadership of the IT department.
Bosch has the opportunity to be right at the centre of events: it has joined the EU High-Level Working Group on Artificial Intelligence. Milan Šlachta, Bosch Group representative in the Czech Republic and Slovakia, says of EU legislation on AI: "When developing potential regulatory measures, it is important to ensure that they are consistent with other regulations and leave enough room for innovation. However, Bosch recognises that regulatory measures, if carefully applied, could help create greater confidence in AI."
Miroslav Dvořák, CEO and co-owner of the Motor Jikov Group holding company, is somewhat sceptical of the new legislation. "If the EU already 'has' to regulate something, I just hope the impact on business will be as minimal as possible in this respect," he says.
In companies, AI is already helping
We asked business representatives: where and how does your company use AI? How do you use AI systems yourself?
Valeo Czech Republic
Leoš Dvořák, Director of the Development Center: We use applied AI tools, for example, to speed up or automate office and administrative tasks. This includes accelerating the recruitment process, where AI tools help find professionals with specific skills on LinkedIn. Some development teams are also experimenting with automatically generating software code from a text description of the final function, automatically "translating" code from one programming language to another, or optimizing existing code for faster computation or for different hardware. It should be emphasized that this is in the internal testing phase and is not applied to series-production customer projects.
By the very nature of our products, we are dealing with integrated AI. Cameras, lidars, radars and driver assistance system controllers handle tasks such as the recognition and classification of objects on the road or the task of manoeuvre planning and route planning precisely thanks to AI.
We use AI not only in the final product itself, but also as a development tool, for example to generate training data to improve the detection capabilities of cameras, where our developers use a method called data augmentation. Imagine, for example, that you need to teach a camera-based assistance system to detect a pedestrian carrying a large cardboard box who enters a poorly lit roadway. If we don't have enough of exactly such scenes filmed by our test cars, we can use generative AI to transform footage filmed in daylight to nighttime lighting conditions and add a plausible-looking pedestrian carrying a large cardboard box to the images. Or we need to teach an autonomous emergency braking system to recognise a kangaroo on the road – this is not a joke, but a real case, as the same car model can be driven in both Europe and Australia. If we don't have enough training data with kangaroos, we generate it with AI. Of course, this augmented data can't completely replace real data, but it significantly improves and speeds up the process of training models to recognize objects on the road.
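The day-to-night transformation and object insertion Valeo describes can be illustrated in a deliberately simplified form on a grayscale image represented as a 2-D list of pixel intensities. Real pipelines use generative models rather than this kind of pixel arithmetic; the functions and values here are invented for illustration:

```python
def to_night(image, factor=0.25):
    """Crudely simulate night lighting by scaling pixel intensities down."""
    return [[int(px * factor) for px in row] for row in image]

def paste_object(image, patch, top, left):
    """Composite a small object patch (e.g. a pedestrian silhouette)
    into the scene at the given position, without mutating the input."""
    out = [row[:] for row in image]
    for i, patch_row in enumerate(patch):
        for j, px in enumerate(patch_row):
            out[top + i][left + j] = px
    return out

# A 4x6 daylight scene (bright pixels) and a 2x2 dark "pedestrian" patch:
day = [[200] * 6 for _ in range(4)]
pedestrian = [[30, 30], [30, 30]]

night = to_night(day)                          # every pixel 200 -> 50
augmented = paste_object(night, pedestrian, 1, 2)  # pedestrian added at row 1, col 2
```

The augmented frame can then be added to the training set as a nighttime example of the rare scene, which is the essence of the approach described above.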
In addition to autonomous driving and driver assistance systems, AI has great potential in the thermal management of electric vehicles to optimize range and battery life.
Gábor Iffland, Director of Communications: Personally, I use AI for search, to diversify my sources of information, and to better formulate text.
MOTOR JIKOV Group
Miroslav Dvořák, CEO: The rise of AI is enormous, and within applied AI we can definitely say that it permeates all departments. Currently, MOTOR JIKOV uses AI mainly in the area of supply chain optimization, where it is used to help forecast demand and optimize inventory. A very common use is for automating repetitive tasks from production through quality control, marketing and HR.
A separate chapter is the use of JIKOV Fostron’s single-purpose machines, which then extends more into the area of integrated AI. We also want to explore the possibilities of use in cooperation with the University of Technology and Economics in České Budějovice.
Miroslav Dvořák, CEO: I was interested in the use of AI to help with time management, scheduling meetings and recalling important events. I would like to try its use in the health and fitness sector as well, and I am intrigued by the possibility of tracking health data, based on which AI can then suggest training plans and provide tailored advice for improving health.
onsemi – ON Semiconductor Czech Republic
Aleš Cáb, Production Director: At onsemi we distinguish between general and specific uses. In the first area, AI is widely used across the company for information retrieval and for text completion and optimization, which works very well. With information retrieval, the result is a quick first answer, which then needs to be handled cautiously and verified in a further step if necessary.
A specific use of AI is in manufacturing. During chip production, which takes 6 to 12 weeks, various parameters are measured and special visual checks are performed after each sub-operation. At the same time, we store the sub-parameters of each process every few seconds, generating huge volumes of data. Together with an external Czech company, we have developed an AI-based system that predicts from this data whether a given chip will be within the required specification, allowing us to skip the costly final measurement on the board and verify the parameters only on the final module.
The AI module also helps in analyzing the data and determining the root cause of problems. It also helps us in setting up the maintenance system.
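The skip-the-final-test decision onsemi describes boils down to a confidence-gated prediction: only when the model is very sure the chip is in spec is the expensive measurement skipped. A minimal sketch, with the parameter names, weights and threshold all invented for illustration (a production system would use a model trained on historical process data):

```python
import math

# Hypothetical weights of a trained logistic model over in-process data.
WEIGHTS = {"etch_depth": 1.8, "film_thickness": -0.9, "temp_drift": -2.1}
BIAS = 0.4

def p_in_spec(measurements):
    """Logistic score: estimated probability the chip meets specification."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in measurements.items())
    return 1 / (1 + math.exp(-z))

def skip_final_test(measurements, threshold=0.99):
    """Skip the expensive final measurement only when the model is
    very confident the chip is in spec; otherwise measure as usual."""
    return p_in_spec(measurements) >= threshold

good = {"etch_depth": 2.0, "film_thickness": 0.1, "temp_drift": -0.5}
risky = {"etch_depth": 0.2, "film_thickness": 1.5, "temp_drift": 1.0}
```

The threshold encodes the cost trade-off: the higher it is set, the fewer chips skip the final measurement, but the lower the risk of shipping an out-of-spec part.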
Josef Švejda, Managing Director of the Czech branch: Sometimes there are problems caused by speech recognition, but it is still a great time saver. Of course, the outputs must be checked; you cannot rely on 100 per cent accuracy.
Continental Barum
Libor Láznička, CEO: Many of our local projects already run on the basis of AI, use it, or are in the testing phase. Globally, Continental is running a pilot project called Supercharge Your Work with Continental's AI Assistant, using a centrally controlled solution, primarily out of concern for data security. At Continental Barum, too, these tools are used under strict security rules.
Specifically at our site in Otrokovice, colleagues have developed camera systems using AI algorithms to identify the right objects or materials across different process operations and production areas. The AI-based systems we have developed and use are there to help employees and make their work easier.
The AI-based global translator is accessible to all of the Group's nearly 200,000 employees. It can translate documents, presentations and images into 23 languages. For many employees, this is a very useful tool.
We have also been using a chat-based AI virtual assistant for recruitment for several years, automating routine processes and communication with candidates. The chatbot is available 24/7/365 at our career booths.
Libor Láznička, CEO: At the end of last year, when I was writing a New Year's greeting to employees and a recap of the past year, I tried generating it with AI. I was surprised at how easy and, more importantly, how quick it was. What's more, it was even usable and easy to read. In the end, of course, I didn't use a word of it. It felt impersonal, effortless, and without any real reflection on the past year and the specific situations that occurred.
Vitesco Technologies
Jan Semerák, head of the industrial engineering department: We currently use AI especially in machine image recognition, where conventional "vision systems" are no longer sufficient. It already saves us dozens of workers. Reliability is close to one hundred per cent, but in most applications it still does not meet our needs (at the Six Sigma level). In such cases a person always has the final say, reviewing the cases with a lower degree of confidence and deciding whether it is a defect. The advantage is that the system evaluates its own confidence and, if necessary, passes the case on to a person.
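The human-in-the-loop arrangement Vitesco describes – act automatically on high-confidence verdicts, escalate uncertain ones – is in essence a simple routing rule. The threshold and probabilities below are illustrative values, not Vitesco's actual figures:

```python
def route_inspection(defect_probability, confidence, threshold=0.999):
    """Return the inspection verdict: act automatically when the model's
    self-reported confidence is high enough, otherwise escalate the
    case to a human reviewer who has the final say."""
    if confidence >= threshold:
        return "reject" if defect_probability >= 0.5 else "pass"
    return "human_review"

verdict_a = route_inspection(0.98, 0.9995)  # confident defect -> reject
verdict_b = route_inspection(0.01, 0.9999)  # confident good part -> pass
verdict_c = route_inspection(0.60, 0.8500)  # uncertain -> human review
```

Raising the threshold sends more cases to people; it is the lever that closes the gap between "close to one hundred per cent" model reliability and the Six Sigma level the process requires.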
Next, we are starting to build and use a private LMS (Learning Management System), which is connected directly to the production line. It contains, for example, work instructions, machine documentation and maintenance records, as well as selected real-time technological data from PLC and MES systems. However, we are only at the beginning of this.
When it comes to systems that intersect across the business, for example, we already have Bing AI integrated into the Office 365 environment, but our goal is to use these tools thoughtfully and effectively.
Jan Semerák, head of the industrial engineering department: I personally use AI for consultations regarding various professional issues (but I think about the information provided, the reliability is definitely not 100 percent) and possibly for the creation of images and logos.
Peter Šveda, senior LEAN manager: I use AI systems to create illustrations for presentations, to write texts for workshops and trainings, and to find various information.
Lubomír Tuček, Country EBR & Communication: I use AI systems mainly for working with text, including translations, summaries and edits. They help me with the wording of emails, checking style and grammar. I also use AI services as a partner in discussing and developing my own ideas and thoughts. They are also of great benefit in the field of programming and data analysis, where we are limited by the established rules on sharing sensitive information.
Bosch Group v ČR
Milan Šlachta, Bosch Group representative in the Czech Republic and Slovakia: Bosch uses and creates artificial intelligence systems at the same time.
Bosch is a world leader in the development of AI technologies. The company's competence is clearly evident, among other things, in the transformation of the automotive industry, where Bosch is one of the few companies able to create a "software-defined vehicle" and offer holistic solutions – from actuators and sensors to vehicle computers and software, including "middleware" (which allows software and actuators from different manufacturers to be interconnected and frees car designers' hands by letting modules and functions be combined in the car regardless of their compatibility).
We also develop software in the Czech Republic, test the use of AI in production, in many cases also for other group companies around the world. We try to understand and learn new things on pilot projects. We can usually also implement them with a reasonable economic return.
The České Budějovice Bosch branch was chosen in 2023 within the Bosch concern as a pilot project and in the following years as a competence center (center of excellence) for the introduction of a new production and logistics digital platform (Mobility area) built on the latest information technologies.
As for AI-based projects already implemented, the Czech Bosch has developed, for example, an affordable automatic optical inspection that uses machine-learning algorithms. It has also developed innovative software for daily use in production, technology and logistics, using a digital-twin image of the machine operator (the AVATAR project). Bosch also collaborated with the Faculty of Mechanical Engineering at CTU on AI-assisted data analysis; the result is a system that – put simply – can predict the outcome, shortening the long-term tests of a product under development.
A new project launched this year, in which the Czech Bosch participates, focuses on semantic communication and the processing of information from industrial machines using machine learning for sixth-generation (6G) mobile networks. Also involved are two leading centres of 6G research – the French EURECOM university and the Finnish University of Oulu. Testing of the proposed solution will be carried out directly on the premises of the Robert Bosch plant in České Budějovice.
Generally speaking, artificial intelligence has great potential to increase efficiency in administrative tasks, planning and production management. We are working on various chatbots in the field of tax and law – one goal, for example, is a solution that will support the creation and review of new contracts. In production planning the potential is even wider, from sales prediction to the optimization of production flows. Of course, caution is needed.
Milan Šlachta, representative of the Bosch Group in the Czech Republic and Slovakia: Leaving aside the applications and services where elements of artificial intelligence already support each of us today, I use AI more for the creation and search of texts and information, sometimes also for the creation of graphic content. I also use the internal AskBosch application, which runs on Microsoft GPT-3.5 Turbo. But it is only the beginning, so far the use of AI is not very evident in my work.
AI Act: A hidden opportunity
Is the European regulation on artificial intelligence just another annoying regulation, or can it be a catalyst for development in companies? Djalel Benbouzid, AI Governance Senior Manager at Volkswagen, leans towards the second option, seeing the AI Act more as an opportunity. We present his commentary, prepared exclusively for the Czech auto industry.
The recently passed European Union law on artificial intelligence has made waves in the tech industry. At first glance, it may appear to be yet another regulatory hurdle – a 144-page document full of technical requirements and potential penalties. But if you take a closer look at it, you’ll find that it’s actually a hidden opportunity.
Yes, there are costs involved. Companies will need to invest in new processes, documentation and possibly staff to ensure compliance. But here is the thing: most of these costs are concentrated at the start. They are mainly fixed, up-front costs – setting up new processes, creating documentation templates, introducing supervision and monitoring mechanisms. Once these systems are in place, ongoing maintenance costs are comparatively low.
Importantly, the technology community is already mobilizing to support implementation. Experts share insights on compliance strategies, and it is hoped that documentation templates will emerge to streamline the process. This joint approach could significantly reduce the burden on individual companies, especially smaller ones.
Positive side effects
More importantly, many of the Act’s requirements will have positive spin-offs for companies. Take transparency, for example. The law promotes clear documentation and explainability of artificial intelligence systems. It’s not just about compliance, it’s about creating institutional knowledge. When you’re forced to clearly document how your AI systems work, you’re also creating a valuable knowledge base for your organization. This can improve collaboration, speed onboarding of new team members, and even spur new innovation as people get a clearer picture of your AI capabilities.
There is also an emphasis on quality and risk management. The law requires companies to carefully think through potential harms and implement safeguards. This may slow development at first, but in the long run it is likely to lead to more robust and reliable AI systems. And that’s a valuable asset in a world where AI failures can lead to reputational damage and loss of customer trust.
How AI visualizes AI Act. | Photo: generated by DALL·E
The law also enforces human oversight of AI systems. While this may seem like an additional burden, it is actually an opportunity to bridge the gap between technical teams and subject matter experts. By involving people more deeply in the AI process, companies can create systems that are not just technically impressive, but truly useful and aligned with business needs.
Perhaps most influential are the law’s requirements for data management and model evaluation, which could lead to unexpected insights. As companies dig deeper into their data and rigorously test their models to meet regulatory requirements, they may uncover patterns or opportunities they previously overlooked.
The regulation will not suppress innovation
Critics say the law will stifle innovation and disadvantage European companies. However, this view may be short-sighted. By setting clear rules of the game, the act could actually encourage innovation by creating a trusted environment for AI development. It could encourage companies to focus on creating AI that is not only powerful, but also ethical and reliable, qualities that could become a significant competitive advantage in the global marketplace.
“The AI Act is not the end of innovation, but the beginning of a new, more advanced era of artificial intelligence development.”
Additionally, the law’s risk-based approach means that many AI applications will face minimal regulation. The law’s focus is only on high-risk systems where the potential for harm justifies stricter oversight. This targeted approach should allow rapid innovation to continue in many areas of AI while ensuring adequate safeguards are in place where they are most needed.
It’s worth noting that compliance costs for high-risk systems should be a fraction of the total development cost. The health sector serves as a good example here. Many of the Act’s requirements are already met in the industry through existing regulations, and healthcare companies are likely to be among the first to seamlessly adapt to the AI Act.
The law also addresses growing AI concerns in a timely manner. The current hype around AI has led to over-promising, which in turn has created unrealistic expectations. As these expectations inevitably go unfulfilled, there is a risk of undermining user trust. The AI Act provides an opportunity to restore that trust and push for the responsible use of this promising technology.
The priority is to designate an administrator
The most pressing deadline in the law is February 2, 2025. By that date, companies must decommission all AI systems that fall under the prohibited practices listed in the law. This is no small task. It requires a comprehensive inventory of all AI systems across the organization, an assessment of their level of risk, and eventually the retirement of some systems.
Therefore, the most urgent priority for companies should be to establish an administrator/manager of AI systems within the entire organization. It’s not just about compliance, it’s about getting a clear view of AI capabilities, risks and opportunities. It’s about turning a regulatory necessity into a strategic advantage.
AI asset managers should be tasked with cataloging all AI systems, assessing their level of risk, ensuring proper documentation, and coordinating compliance efforts across departments. But beyond that, they should be empowered to identify synergies, uncover redundancies, and drive AI strategy across the organization.
As the person responsible for defining the AI governance framework for a major automotive group, I can attest to the benefits of structuring and standardizing best practices across the organization. The compliance-by-design process is already delivering positive results, uncovering hidden efficiencies and supporting a more holistic approach to AI development.
Catalyst of transformation
In conclusion, the EU AI law is more than just a regulatory challenge, it is a catalyst for organizational transformation. It pushes companies to develop AI more thoughtfully, transparently and strategically. Those who embrace this challenge and see it as an opportunity rather than a burden will be well-positioned not only to be compliant, but to lead the way in the future of AI.
The next few years will be crucial. As companies race to meet the 2025 deadline, they will be laying the foundations for their AI strategies for years to come. The winners will not just be those who tick all the regulatory boxes, but those who use this moment to build more robust, transparent and valuable AI systems. The AI Act is not the end of innovation, but the beginning of a new, more advanced era of AI development, in which ethical aspects and social impact are an integral part of technological progress.