The ethical AI playbook 2.0

Part 2: Risk Management and Data Security


An ethical playbook for artificial intelligence for the real estate and construction sector compiled by the Building Information Foundation RTS and A-INS Group. The purpose of the updated playbook is to support actors in the built environment in utilizing artificial intelligence, and to promote sustainable new technologies in the sector.

“How to mitigate the risks in adopting AI?”

Information security, risk management, and best practices in organisations

The growing volume of data and information in the AECO sector brings significant opportunities but also risks and challenges. The amount of data collected in our society is increasing rapidly. Data is gathered from the built environment, from construction projects, and from the people involved in them. For example, real-time data collection on construction site progress is becoming increasingly common. At the same time, these data are being used more extensively for various purposes, such as training continuously evolving AI systems. This development provides many positive possibilities, but also introduces risks and challenges that organisations must be increasingly prepared to address.

In this section, we describe what organisations in the AECO sector should consider to prevent challenges associated with adopting and utilizing AI. We focus particularly on information security, identifying key risks, and organisational best practices for the ethical use of AI. A capable organisation keeps itself informed about the latest developments in AI (and the regulations surrounding it), thereby enabling it to anticipate and respond to potential challenges and risks.

Secure use of AI is part of comprehensive information security practices

Information security refers to protecting data from unauthorized access, viewing, use, modification, or damage. In a secure environment, information is accessible to authorized parties when needed. Information security is also closely linked to privacy and an individual’s right to decide how their personal data is collected, stored, shared, and utilized. As Liang et al. (2024) note, the increasing digitalization of the AECO sector and the growing use of AI continuously introduce new requirements for information security covering the protection of data related to information systems, physical machines and devices, people, buildings, and infrastructure. In project-based business, these skills become even more critical as project stakeholders increasingly adopt new digital tools.

When considering information security in connection with AI use, the following principles should be addressed. Ensure that these principles are applied not only within your own organisation but also by the providers of any AI solutions you use.

 

Collect and store only the amount of data necessary for the intended purpose. This reduces the likelihood of misuse. In addition, anonymising and pseudonymising data can limit the unnecessary storage of identifiers linked to individuals.
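To illustrate, pseudonymisation of a direct identifier can be sketched with a keyed hash. This is a minimal example under assumptions: the field names, the badge identifier, and the key are invented for illustration, and a real deployment would need proper key management rather than a hard-coded value.

```python
import hmac
import hashlib

# Hypothetical secret, held separately from the data store; rotating or
# destroying it severs the link back to the original identifiers.
PSEUDONYM_KEY = b"store-this-key-outside-the-dataset"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

# Example: a site-access log keeps only the fields the analysis needs,
# dropping the worker's name entirely (data minimisation).
raw_event = {"worker_name": "J. Smith", "badge_id": "B-1042",
             "gate": "North", "timestamp": "2024-10-01T07:58:00"}
stored_event = {"worker": pseudonymise(raw_event["badge_id"]),
                "gate": raw_event["gate"],
                "timestamp": raw_event["timestamp"]}
```

Because the pseudonym is stable, records about the same badge can still be linked for analysis, while destroying the key removes the way back to the original identifier.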

Inform individuals about how data related to them is being used and obtain consent for its use. Communicate transparently how you intend to process their data. Ensure that individuals understand their rights as well as the practices related to data collection, use, storage, and access. Obtain written consent for the use of their data and provide individuals with the option to withdraw that consent. Likewise, when using AI applications yourself, make sure you understand how your own data is being used – prefer solutions that are transparent in their operations.

 

Ensure secure storage and transfer of data and monitor access rights. Use appropriate tools and methods for encrypting and protecting data. Make certain that data access is restricted only to those who need it at a given time for a specific purpose.


Use only tools that are verified to be secure, and do not assume that information you provide to an AI system will remain protected. If there is no reliable information or explicit confirmation of security practices, it is safer to assume that an AI system is not secure. Beyond sensitive information, even seemingly harmless data can create security risks. For example, questions posed to a language model or conversations held with it may, in aggregate, reveal a great deal about an individual and their work tasks. In the AECO sector, highly critical information is handled at many levels, and such data can leak indirectly, for instance, in the manner described above.

 

Maintain proper documentation of AI system use. Comprehensive documentation increases transparency and trust and enables retrospective evaluation of how the AI system has been used. Also ensure that the documentation itself is handled in a secure manner.

 

Make sure you comply with up-to-date ethical regulations and legal requirements. Examples include the GDPR (General Data Protection Regulation), which applies in EU countries. Also pay attention to how partners and vendors operating in different regions implement and comply with these regulatory requirements in their own activities.


Ensure the continuous implementation of ethical practices, for example through audits, and commit to ongoing monitoring and improvement. Due to the rapid development of AI, enabling information security and ethical conduct requires continuous review and refinement of practices. This may include, for instance, regularly reassessing the AI solutions in use in terms of the accuracy of their outputs, the appropriateness of their use, associated risks, and regulatory compliance.


Take information security into account even after you stop using an AI system. In addition to managing security during AI deployment and use, practices should also be defined in advance for the period after an AI solution is no longer in use. It is important to determine how and by whom collected data will be stored and managed after use ends, and to agree on procedures for data destruction.

 

Ensure information security across partner and vendor relationships and for the overall AI system. Make sure that your partners and service providers also follow secure information practices – the security of the supply chain is only as strong as its weakest link. Agree on clear rules and responsibilities for maintaining secure operations. For example: who is responsible for collecting and storing data? Continuously ensuring these aspects is particularly important in project-based work, where partners can change frequently and rapidly.

 

In the end, humans are always responsible for AI-driven decisions. As Weber (2021) notes, even if AI plays a role in producing outputs, responsibility for actions, decision-making, and final outcomes (and their consequences) rests with people.

Risk management examines ethical issues broadly

The use of AI involves risks, and the importance of addressing them is heightened by the rapid pace of technological development. As Futurice (2024) notes in its guide, organisations that use AI must be able to balance fast and agile adoption with thorough risk management. At the same time, to ensure that ethical principles can be upheld sustainably, organisations must be capable, from the very start of AI adoption, of identifying, assessing, and minimizing potential risks. In the long run, comprehensive risk management is not an obstacle to rapid progress; in fact, it is a prerequisite for it. Even the realisation of a single significant unmanaged risk can severely slow AI adoption or halt it entirely.


The KTI Kiinko report (2024) and Bolpagni et al. (2021) identify potential risks related to the use of artificial intelligence in the AECO sector:

  1. Data breaches and cyberattacks, misuse of sensitive information, and copyright violations.
  2. Human error and negligence in data handling.
  3. AI outputs that are inaccurate, incorrect, biased, or discriminatory. Insufficient transparency to allow critical evaluation of results, and overreliance on AI-generated outputs without proper scrutiny.
  4. Poor-quality, incorrect, or even malware-contaminated data, where biases, gaps, or risks are not properly understood or accounted for.
  5. AI’s limited ability to produce solutions that are meaningful for business, as well as organisational (leadership) shortcomings in leveraging AI effectively.
  6. Uneven distribution of AI benefits between organisations and individuals.
  7. Insufficient sector-wide competence in the ethical use of AI, rapid changes in job roles, and potential skills gaps.

Note! These risks are illustrative—they are not comprehensive. The purpose is to prompt the reader to consider the types of risks that are important to identify and address when using AI. Due to the rapid development of AI, risks and their impacts must be assessed continuously, and it is neither practical nor advisable to rely on static, all-encompassing risk lists.


Examples of how AI-related risks may materialize in AECO sector operations

Project Manager: A construction project manager has heard about a new AI system that enhances site operations by collecting and analysing real-time data from the construction site. The project manager decides to take advantage of the AI vendor’s trial period and instructs the production organisation to deploy the system on site immediately.

 

Risk: The project manager has not ensured the AI system’s information security, verified that data is collected, used, and stored ethically, nor obtained consent from the individuals whose data is being collected (example risks 1 and 2). At worst, this may lead to project data leaking and the misuse of sensitive information belonging to project stakeholders. By addressing these risks, the AI system’s deployment becomes slightly more laborious and slower, but it helps ensure secure system use.

Designer: A structural designer uses an AI-based dimensioning tool in detailed design, which speeds up the design process significantly. Because the tool has been used successfully in several previous projects without detected errors, the designer gradually begins to spend less time manually checking the final output.

Risk: The designer places too much trust in the AI-generated solutions (example risk 3). This may lead to significant deficiencies in the final output, which can have major negative impacts on safety during construction and use. Considering this risk may slow the design work slightly, but it helps avoid unacceptable safety-related consequences.

Construction project consultant: A construction project consultant has joined the construction project as a new subcontractor. They decide to accelerate their work by using an AI assistant in the project’s remote meetings, which records the discussion and generates notes. A preparatory discussion held before an upcoming meeting is also recorded by the AI assistant, which then shares its content with individuals who should not have received it.

Risk: Using AI assistants to create meeting notes is an easy and common use case, as the construction sector, like other fields, holds many meetings. Current advanced language models usually perform this task at least adequately. However, this use case involves a data-leakage risk, and several such incidents have already occurred; the use of recording assistants is also a matter that should be agreed upon in advance. Many solutions exist for producing meeting notes, but their information security is not always verified.

Rebar manufacturer: A company manufacturing prefabricated steel reinforcement elements monitors its production line with an AI-based quality-management system. The system approves a structure that is dimensionally accurate and produced according to the processes and components included in the production description.

Risk: The design criteria have not been updated in the AI model used in quality management, and reinforcement products whose technical characteristics do not meet the project’s quality requirements can leave the production line. The error is discovered only during installation as incorrect sizing, or during use as structural damage.

Site Supervisor: A contractor’s site supervisor has, during a busy work week, started using an AI-based system that automatically generates a report of the site’s quality and safety observations based on helmet-camera video recordings. Because information-security practices have been carefully agreed upon, the supervisor is excited about the productivity benefits the system will bring to their work.

Risk: Even if the AI system’s information security has been ensured and the data it uses is collected by the organisation itself, the AI-generated report may still be incomplete, inaccurate, or biased (example risks 3 and 5). Using such a report without critical review may lead to deficiencies in site safety. By considering these risks, the AI system can be used effectively as support for reporting and decision-making, provided that the site supervisor personally checks and approves the final output.

Property owner: A property owner enhances their leasing operations with an AI tool that supports rental pricing and optimises the tenant-selection process.


Risk: Although AI, when implemented properly, can improve leasing operations, it also involves significant risks, particularly in the handling of sensitive information (risk 1) and potential discrimination (risk 3). The data used to train the AI model may be biased, causing the model to unjustifiably favour certain tenant candidates. This risk must be considered particularly carefully in situations where even small biases can significantly influence decision-making.


As with information security, AI should not be viewed as a separate element in risk management but as an integrated part of the organisation’s overall risk management framework. Since AI-related risk management is fundamentally governed by general risk management methods and processes, this playbook does not go into those in detail; instead, we encourage the reader to refer to their own organisation’s risk management processes and guidelines. However, the risk management process should at minimum consider the nature of the risk and its potential impact, the likelihood of the risk materializing and the severity of that impact, as well as the measures for managing the risk and the allocation of related responsibilities.
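The minimum elements named above (the nature of the risk, its likelihood and severity, and the allocation of responsibilities) can be captured even in a very simple risk register. The sketch below is illustrative only: the example risks, scales, and owners are invented, and a real organisation would use its own risk-management framework and scoring conventions.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (minor) .. 5 (critical)
    owner: str        # who is responsible for managing the risk

    @property
    def score(self) -> int:
        # A basic likelihood-times-severity score for prioritisation.
        return self.likelihood * self.severity

# Invented example entries mirroring scenarios discussed in this section.
risks = [
    Risk("Sensitive project data leaks via an unvetted AI tool", 3, 5, "CISO"),
    Risk("Over-reliance on unchecked AI design output", 4, 4, "Lead designer"),
    Risk("Meeting-notes assistant shares a recording too widely", 3, 3, "PM"),
]

# Review the register from highest to lowest score.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}  -> {risk.owner}")
```

Even a register this simple makes the required elements explicit: what the risk is, how likely and how severe it is, and who owns the mitigation.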


Adopting AI in organisations

The adoption of AI in organisations is both a technical and a social process. In addition to the information-security and risk-management considerations presented above, organisations should pay particular attention to training and change-support practices when introducing AI.

Information-security practices should be built into the organisation, not added on top

Information security related to AI should not be viewed as a separate, standalone component, but as an integral part of the organisation’s overall information-security framework. Secure practices should be the default assumption. Without them, AI cannot be used ethically. Since an organisation’s security is only as strong as its weakest link, it must also ensure that employees have sufficient understanding of information security and AI-related risks. Both secure systems and their secure use are therefore essential.

Continuous training and expanding understanding of AI’s possibilities and risks

All employees should have the basic skills and knowledge needed to use AI, as well as an understanding of the ethical and especially information-security implications connected to it. This shared understanding of foundational skills and common rules should be continuously maintained. Put simply: anyone who interacts with AI solutions in their work—which today and in the future applies to practically every employee regardless of role—should understand the organisation’s rules for AI use.

Shared understanding can be strengthened not only through training but also by codifying a common AI foundation into, for example, an organisation-wide AI policy. Such a policy may include basic guidance on secure AI use, a statement on which AI services are permitted, definitions of responsibilities, and instructions for dealing with undesired situations.

In addition to building general awareness, employees should be equipped to handle AI ethically within the context of their specific roles. Management and those working in HR, financial administration, and legal functions should also understand the key elements of AI for their own roles. While leaders need to understand the organisational-level changes triggered by AI adoption, HR must understand changes in job roles and training needs, and legal teams must understand how AI affects intellectual-property considerations.

Broad participation also helps the organisation understand and develop ethical AI practices more comprehensively; it enriches discussions throughout the organisation and strengthens a culture of ethical behaviour. Encouraging and rewarding employees for ethical conduct further supports the right behaviours. This may include allocating resources to enable ethical practices, recognising those who raise concerns about risks or unethical behaviour, and ensuring that incentive models do not conflict with ethical expectations.

 

Provide support for continuous change and acknowledge the stress that AI may create for individuals.

Keeping up with the rapidly evolving AI landscape requires effort even from the most experienced professionals. While this can be exciting, it can also lead to stress and feelings of inadequacy or uncertainty.

Whose work will change and how? Who is allowed to use the newest AI tools, and in what ways? Does the increasing use of AI create risks of unequal treatment or unbalanced workload distribution among staff? These are normal questions that will arise. Addressing them transparently helps the organisation to reduce the uncertainty and strain associated with AI adoption.

As AI usage increases, another challenge emerges: growing workload. If the time saved by automating routine knowledge-work tasks is immediately filled with tasks that demand high cognitive effort, overall strain may increase. Luoma (2024) highlights that although AI might seem to ease work, in the long term it can actually raise workload if the pressure to fill the working week with more cognitively demanding tasks grows.

The simple fact that AI is used in an organisation or project can itself be burdensome. For example, if AI systems increasingly collect and analyse data about individual work performance without sufficiently clear rules, employees may experience the situation as stressful and uncertain.

AI may also trigger fears related to rapid changes in job roles or even job loss. Therefore, supporting individuals in developing their skills, while also building realistic understanding about the pace and nature of job-role changes, is essential.

Organisations should support employees by giving them the resources and capacity to navigate constant change, for example by offering sufficient time and support for learning and adapting. It is also important to acknowledge that individuals have different abilities and motivation levels when it comes to continuous learning. Similarly, it must be recognised that adopting new ethical practices takes time and adjustment. Instead of using all productivity gains from AI to increase high-cognitive-load tasks, could some of that saved time be used for recovery, or for building change readiness—such as learning new skills? Consistency in AI governance also brings clarity and reduces uncertainty about AI use. Moreover, maintaining an ongoing dialogue about AI is an excellent way to ease concerns and dispel fears within the organisation.

Data security practices should be built into the organisation, not added on as an afterthought – for example, inheritance of data access rights

When deploying AI systems for a company or a project, it must be ensured that they operate in accordance with the organisation’s existing access-rights policy when handling files and generating outputs. In practice, this means that the AI must not present outputs to a user if the underlying content used to generate those outputs includes information from documents to which the user does not originally have access rights. This can be ensured through appropriate technical requirements and testing together with the AI provider.

For example, consider a situation where a person with broad access rights uses an AI system to create reports or in-depth analyses. A data-leakage risk arises when these analyses are shared with others: the original AI user may find it very difficult to know whether all recipients have the right to access all the source material the AI used to generate the output. Verifying this can be extremely challenging, as the final output may contain individual sentences or paragraphs derived from multiple documents, without necessarily matching any single source verbatim.
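As a sketch of the access-rights inheritance described above, the retrieval step can filter source documents against each user's permissions before anything reaches the model, and attach the sources used so recipients can be checked later. The document names, user names, and access-list structure below are invented for illustration; a real system would integrate with the organisation's actual access-control service.

```python
# Hypothetical document store: each source document carries an access list.
DOCUMENTS = {
    "structural-calcs.pdf": {"acl": {"alice", "bob"}, "text": "..."},
    "tenant-contracts.docx": {"acl": {"alice"}, "text": "..."},
    "site-diary.txt": {"acl": {"alice", "bob", "carol"}, "text": "..."},
}

def sources_for(user: str) -> list[str]:
    """Return only the documents this user may read; the AI answers from these."""
    return [name for name, doc in DOCUMENTS.items() if user in doc["acl"]]

def answer_with_provenance(user: str, question: str) -> dict:
    """Enforce access rights *before* generation and attach the sources used,
    so it is possible to check whether a recipient may see the output."""
    allowed = sources_for(user)
    # ... in a real system, only the `allowed` documents would be passed
    # to the model as context here ...
    return {"question": question, "sources": allowed}

print(answer_with_provenance("bob", "Summarise open quality issues"))
```

Filtering before generation, rather than trying to scrub the output afterwards, matches the point made above: once content from multiple documents is blended into a report, it is very hard to trace which sentence came from where.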


Future Outlook

The rapid pace of AI development requires continuous evolution of practices

As the volume of collected data and information continues to grow, the rapid development of AI and AI-enabled applications likewise demands that organisations continuously update and refine their practices. While the newest solutions may bring business advantages, their adoption must be matched with equally fast efforts to ensure secure use and to address emerging risks. Below, we highlight two development trends that organisations should be aware of from the perspectives of security, risk management, and organisational implementation.

The first development trend — which is already partly a reality — is the increasing sophistication of criminal activities enabled by advances in AI. Criminals are leveraging AI’s capabilities at an accelerating rate. This requires heightened vigilance toward, for example, phishing attempts, both at the organisational and individual level. Highly convincing multimodal scam messages (using text, video, images, and audio — either separately or combined) will become even more common. For instance, highly realistic voice and video deepfakes of company executives may be created to persuade employees to share sensitive information. At the same time, AI tools enable the automation and faster execution of cybercrimes, which means that responses to security breaches must also become faster and more proactive.

The second potential future development trend is the increasing presence of AI-driven robots as part of organisational and team operations. Public discussion of robotics has been comparatively quiet. Large language model–based AI systems and semi-autonomous robots have nevertheless advanced rapidly, for example in performing supporting tasks on construction sites, such as measurements and markings. At the same time, the development of AI further accelerates advancements in robotics, and in the future these technologies may become an even more integral part of construction site operations and property maintenance.

The deeper integration of AI and robotics in organisations brings entirely new challenges. How should an AI agent (see also part 5) be onboarded into an organisation’s operating practices, and how can its understanding of the organisation’s ethical guidelines be ensured? How will the increased use of robotics impact site safety practices? As Poikola et al. (2024) note, interaction between humans and AI is likely to deepen significantly in the future. While this offers new opportunities, for example through more intuitive and approachable user experiences, it also creates challenges, particularly in managing increasingly immersive and multimodal information that requires careful attention to security and data protection.


Consider at least these!

The adoption of AI in organisations is both a technical and a social process. Train your personnel, continuously build your organisation’s understanding of AI’s risks and opportunities, and monitor how AI adoption affects employee workload. Because AI is evolving rapidly, skills and competencies must also be kept continuously up to date. Ensure a realistic understanding of both the positive and negative potential developments across the organisation. In the future, AI and robots may become an even more integrated part of operations and teams in the AECO sector, which may affect, for example, how technology and humans interact and how this interaction deepens over time.

In AI-related information security, you should consider at least the following principles.

Make sure these principles are followed in your own operations, and that the AI solutions you use also adhere to them.

 

  • Collect and store only as much data as is necessary for the intended purpose.
  • Inform individuals about how data related to them will be used, and obtain their consent.
  • Ensure secure storage and transfer of data, and monitor access rights.
  • Use only tools that have been verified as secure, and do not assume that information given to an AI system will automatically remain protected.
  • Maintain proper documentation of how the AI system is used.
  • Ensure compliance with up-to-date ethical regulations and legal requirements.
  • Ensure the continuous implementation of ethical practices, for example through audits, and commit to ongoing monitoring and improvement.
  • Consider information security even after the AI system is no longer in use.
  • Ensure the security of partner and supplier relationships, as well as the overall security of the AI system.
  • Remember that, ultimately, humans are always responsible for the decisions made with AI.
  • Discuss and agree in advance on the use of AI assistants in projects, both for information security reasons and to maintain a constructive atmosphere in future meetings.

Risk management examines ethical issues broadly, taking into account not only one’s own activities but also those of the whole community. Potential risks related to the use of AI include, for example:

  • Data breaches and cyberattacks, misuse of sensitive information, and copyright infringements.
  • Human errors and negligence in data handling.
  • AI outputs that are inaccurate, incorrect, biased, discriminatory, or lacking transparency; placing too much trust in AI-generated results without critical evaluation.
  • Low-quality, incorrect, or even malware-infected data, whose biases, gaps, or risks are not properly understood.
  • AI’s insufficient ability to produce solutions that are meaningful for the business, as well as organisations’ (particularly management’s) limited ability to leverage AI effectively.
  • Unequal distribution of AI’s benefits between organisations and individuals.
  • Insufficient competence within organisations and across the sector in the ethical use of AI; rapid changes in job roles and potential skill gaps.

Ask the AI system for the source of the information it provides, and evaluate that source with your own critical judgment.

Read More

Futurice, Generative AI: From use cases to enterprise-wide scaling and transformational change, 2024, accessed 10/2024.

In Finnish: Kiinko, KTI, Mistä KIRA-ala puhuu 2024: Tekoälyn vaikutukset kiinteistö- ja rakennusalaan, 2024.

Liang, C. J., Le, T. H., Ham, Y., Mantha, B. R., Cheng, M. H., & Lin, J. J., Ethics of artificial intelligence and robotics in the architecture, engineering, and construction industry, 2024, Automation in Construction, 162, 105369.

In Finnish: Luoma, J., Viisi asiaa, jotka jokaisen tulisi ymmärtää tekoälystä työelämässä, Uutinen, Aalto-yliopisto, 2024.

Weber-Lewerenz, B., Corporate Digital Responsibility in Construction Engineering – Construction 4.0, 2021.