The ethical AI playbook 2.0

Part 5: Good Use Case

An ethical playbook for artificial intelligence for the real estate and construction sector compiled by the Building Information Foundation RTS and A-INS Group. The purpose of the updated playbook is to support actors in the built environment in utilizing artificial intelligence, and to promote sustainable new technologies in the sector.

“How to identify a good use case for AI?”

The current state of AI use cases

As presented in the overview section, artificial intelligence has in a short period of time emerged as one of the most significant areas of interest in the AECO sector, while the industry also has clear potential to benefit more broadly from the use of AI. There is likewise no shortage of identified use cases in the sector—quite the contrary.

Although the strong enthusiasm for identifying and testing various AI use cases is largely positive, the pitfalls should be kept in mind when deploying AI. First, a large number of unprioritized use cases can lead to a lack of focus. As a result, there may be insufficient resources to advance the most promising use cases, or investments in them may be crowded out by mediocre ideas. Second, excessive eagerness may lead to poorly considered and unethical experimentation, for example when issues related to data security are not examined with sufficient care. In this section, we aim to provide guidance on avoiding these pitfalls: we present the key characteristics of a good use case as well as guidelines for prioritizing use cases. Finally, we offer recommendations for localizing AI in the AECO sector.

Identifying and Prioritizing a Good Use Case

In the situation picture, we presented potential AI use cases, and in the previous section, Part 4: Business efficiency and new business models, the opportunities enabled by artificial intelligence. In this section, we highlight the key characteristics of a strong use case connected to business value, as well as how one might approach prioritizing one's own use cases. Seemingly good and inspiring use cases are often easy to come up with, especially given the continuously expanding range of AI applications.

The checklist for a good use case from the AI for built environment online course highlights the following aspects to consider when evaluating the viability of a use case.

 

A good use case for leveraging artificial intelligence can be identified when:

  • There is clear business-driven demand for the use case, and it delivers tangible added value.
  • The benefits of the use case can be measured or convincingly justified.
  • The use case is sufficiently specific, and its scope can be clearly defined.
  • There is enough high-quality data available to implement the use case, or such data can be collected with reasonable resources.
  • The use case is technically and organizationally feasible, can realistically be implemented in practice, and the organization is ready to adopt it at this time.
  • Ethical risks related to the practicality and potential benefits of the use case, such as the possibility of generating incorrect information, have been taken into account.
  • The use case adheres to generally accepted ethical practices.

In general, AI performs well in text-to-text tasks, provided that the training data corresponds to the intended use. According to Ghimire et al. (2024), such tasks include analysis and synthesis of materials, production of communication content, generation of draft and example images during the design phase, as well as schedule forecasting and optimization tasks. The same study finds that AI systems do not produce sufficiently reliable information for problems requiring a broad understanding of the operating environment and experiential knowledge, such as challenging urban and structural design tasks and complex mathematical calculation and dimensioning tasks. This assessment reflects research from 2024 and, given the rapid pace of technological development, may change.

Ghimire, P., Kim, K., & Acharya, M. (2024). Opportunities and Challenges of Generative AI in Construction Industry: Focusing on Adoption of Text-Based Models. Buildings. https://doi.org/10.3390/buildings14010220

The Use of Artificial Intelligence and Environmental Responsibility

The energy consumption of AI systems and the associated carbon dioxide (CO₂) emissions have become a concern for responsible organizations. Such emissions data are often not disclosed on a use-case-specific basis by providers of AI solutions. It is therefore advisable to raise this issue proactively and request service providers to assess the climate impacts of their solutions. When using well-known large-scale cloud providers, it is also possible to choose the data center location, allowing organizations to favor the Nordic countries for their cooler climate and energy production methods.
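The effect of data center location can be illustrated with a simple back-of-the-envelope calculation: emissions equal the energy consumed multiplied by the carbon intensity of the local grid. The sketch below is purely illustrative; the function name and the intensity values are hypothetical placeholders, not vendor or grid figures.

```python
# Illustrative sketch: estimating the CO2 footprint of an AI workload from
# its energy use and the carbon intensity of the local electricity grid.
# All numbers below are hypothetical placeholders, not real vendor data.

def co2_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """Emissions (kg CO2) = energy consumed (kWh) x grid carbon intensity."""
    return energy_kwh * grid_intensity_kg_per_kwh

# A hypothetical monthly inference workload of 1200 kWh, compared across
# two regions with illustrative grid-intensity values.
regions = {
    "nordic_data_center": 0.03,  # low-carbon grid (illustrative value)
    "average_eu_grid": 0.25,     # illustrative value
}
for region, intensity in regions.items():
    print(region, round(co2_kg(1200, intensity), 1), "kg CO2")
```

Even with rough inputs, such an estimate makes the climate impact of location choices concrete when comparing cloud providers.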


Prompting Artificial Intelligence

Generative artificial intelligence based on language models always requires a prompt to operate. Prompting refers to the process of instructing a generative AI system by specifying what information is desired and in what form the output should be presented. Ronanki et al. (2023) define the prompting of AI systems as a process in which a series of inputs is provided, and the output is continuously refined. A well-constructed prompt reduces the risk of incorrect information and improper output formatting.

According to Kallio (2025), the characteristics of a good AI prompt include:

  1. The prompt provides clear background information so that the AI understands the context in which the information is being requested.
  2. The prompt defines clear objectives.
  3. The prompt sufficiently constrains and specifies the expected response.

Research on AI prompting is still in its early stages, and there is limited research-based guidance on optimal prompt structures. One way to approach an effective prompting process is the RISEN model described by Maghani (2024), where:

R = Role: Specify the role in which you expect the AI to operate, for example: “you are an expert in energy monitoring for property owners.”

I = Instructions: Define the task for the AI, for example: “your task is to monitor a building’s energy consumption using sensor 1, which measures supply air temperature, and sensor 2, which measures exhaust air temperature, and to compare these with the energy consumption forecast.”

S = Steps: Describe the steps the AI should follow, for example: “retrieve the building’s energy simulation, check the heating degree days, and retrieve historical sensor data.”

E = End goal: State the desired end state of the output, for example: “I want to see the actual and forecasted heating demand of my building.”

N = Narrowing: Define the constraints, for example: “I want to know the actual and forecasted energy consumption for week 13 and its comparison with heating degree days.”
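The five RISEN components above can be assembled into a single prompt programmatically, for example when the same prompt structure is reused across tasks. The helper below is a minimal sketch; the function name and output format are illustrative, not part of any specific tool or API.

```python
# Minimal sketch: composing a prompt following the RISEN structure
# (Role, Instructions, Steps, End goal, Narrowing). The helper name
# and formatting are illustrative choices, not a standard API.

def risen_prompt(role, instructions, steps, end_goal, narrowing):
    lines = [
        f"Role: {role}",
        f"Instructions: {instructions}",
        "Steps: " + "; ".join(steps),
        f"End goal: {end_goal}",
        f"Narrowing: {narrowing}",
    ]
    return "\n".join(lines)

# The energy-monitoring example from the text, expressed as a RISEN prompt.
prompt = risen_prompt(
    role="You are an expert in energy monitoring for property owners.",
    instructions=("Monitor the building's energy consumption using sensor 1 "
                  "(supply air temperature) and sensor 2 (exhaust air "
                  "temperature), and compare these with the energy forecast."),
    steps=["Retrieve the building's energy simulation",
           "Check the heating degree days",
           "Retrieve historical sensor data"],
    end_goal="Show the actual and forecasted heating demand of the building.",
    narrowing="Report week 13 only, with a comparison to heating degree days.",
)
print(prompt)
```

Templating the structure this way also makes prompts easier to review and version as part of an organization's shared practices.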

Ronanki, K., Cabrero-Daniel, B., Horkoff, J., & Berger, C. (2023). Requirements Engineering Using Generative AI: Prompts and Prompting Patterns. https://arxiv.org/pdf/2311.03832

Kallio, S. (2025). Näin teet hyviä kehotteita – Promptaus opas (in Finnish). https://santerikallio.com/promptaus-opas/

Maghani, S. (2024). Prompt Engineering, Explained. https://medium.com/electronic-life/prompt-engineering-explained-3b83ba347722

Examples of how AI use cases can be prioritized

Once a good use case has been identified and it meets the criteria outlined in the checklist above, there may still be more potential use cases available than the organization is able to implement.

Next, use cases can be prioritized, for example, using the perspectives outlined below.

a. Improvement relative to the required investment

How much added value does the AI-based use case deliver in relation to the resources invested?

b. Ease of adoption and maintenance

How easily and efficiently can the use case be implemented in operations, and how well can it be sustained over time?

c. Technical readiness

Are the necessary technological capabilities already in place, or is there a need to develop new skills or capabilities in-house or through external services?

d. Fit with existing or new business areas

Is the use case targeted at current markets and does it support existing business operations, or does it address an entirely new market, potentially involving completely new business activities?

e. Business and ethical risks

How likely are the potential risks associated with the use case, and how significant are their impacts in relation to the benefits gained?

 

The perspectives outlined above do not by themselves give a definitive answer as to whether a use case is good or bad. Rather, the portfolio of use cases that emerges from the prioritization process should be balanced and reflect the organization's business objectives. For some organizations, it makes the most sense to emphasize small, low-risk use cases; for others, the ambition to be a frontrunner increases their tolerance for risk. A third approach balances use cases that can be implemented immediately with those that represent new opportunities for the future.
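One simple way to operationalize the five perspectives (a–e) is a weighted score per use case. The sketch below is purely illustrative: the weights, the 1–5 scores, and the example use cases are hypothetical, and each organization would calibrate its own values in line with its business objectives and risk appetite.

```python
# Illustrative sketch: scoring candidate use cases along the five
# prioritization perspectives (a-e). Weights and 1-5 scores are
# hypothetical examples, not recommended values.

WEIGHTS = {
    "value_vs_investment": 0.3,  # a. improvement relative to investment
    "ease_of_adoption": 0.2,     # b. ease of adoption and maintenance
    "technical_readiness": 0.2,  # c. technical readiness
    "business_fit": 0.2,         # d. fit with existing or new business areas
    "risk": 0.1,                 # e. risks (higher score = lower risk)
}

def score(use_case: dict) -> float:
    """Weighted sum of a use case's scores across the perspectives."""
    return sum(use_case[k] * w for k, w in WEIGHTS.items())

# Two hypothetical candidate use cases scored on a 1-5 scale.
candidates = {
    "meeting-minutes assistant": {
        "value_vs_investment": 4, "ease_of_adoption": 5,
        "technical_readiness": 5, "business_fit": 3, "risk": 4,
    },
    "automated structural design": {
        "value_vs_investment": 5, "ease_of_adoption": 2,
        "technical_readiness": 2, "business_fit": 4, "risk": 2,
    },
}
for name, c in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(c):.2f}")
```

Such a score should be treated as a conversation starter for the portfolio discussion, not as a mechanical decision rule.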


Suggestions for implementing localization

Overall, successful localization supports the adoption and responsible use of artificial intelligence at the local industry level and makes it possible to account for sector-specific characteristics, such as local regulations, the roles of different stakeholders, and the use of professional terminology in AI-generated outputs. As the data content of the AECO sector is substantial in scope (as discussed in the situation picture section), the industry has strong prerequisites for localization and for meeting the requirements outlined below.

 

At the industry level, successful localization requires consideration of the following aspects:

  • The development of shared rules of engagement, standards, and data repositories that are compatible with existing standards (such as IFC in BIM design). In the long term, this enables the formation of a local AI ecosystem.
  • Clear definition of responsibilities across different areas of localization, such as language versions, data sources, or regulatory adaptations. These needs should be identified and prioritized systematically so that resources can be focused on areas that deliver the greatest benefit.
  • Collaboration among stakeholders, for example through shared data exchange and joint training initiatives, supports the adoption of practices as common operating models. Effective collaboration also helps increase overall industry understanding and joint acceptance of best practices and ethical approaches, enabling their diffusion to smaller actors as well.

At the organizational level, successful localization requires consideration of at least the following aspects:

  • Organizational capability to commit to shared practices and readiness to change in order to operate in accordance with them. This may require, for example, capabilities for technical integrations and effective management of the organization's own data infrastructure.
  • Although focusing solely on organization-level localization may in some cases yield a competitive advantage over other actors, it is often sensible to link it to industry-level localization because of the workload involved and to enable economies of scale.
  • Describing the organization's development path in the form of an AI strategy or roadmap helps steer development toward business objectives, systematize localization efforts, and clarify the organization's role in the AI landscape. Support for developing these can be found, for example, here: https://kirafoorumi.fi/kiraalya/ (in Finnish)
  • In addition to the organizational level, localization can also be implemented at the project-type and project level. For example, project types requiring extensive specialized expertise or multi-year megaprojects may benefit from tailored localization.

Future Outlook

Point solutions will remain part of everyday work, supported by more comprehensive solutions

As noted in previous sections, the rapid pace of current AI development makes it challenging to predict its direction even in the short term. While the definition of a good use case can be expected to remain relatively stable, the use cases themselves may change significantly and rapidly.

One expected development trend is the further proliferation of point solutions aimed at improving personal productivity in the everyday work of AECO sector professionals. AI is likely to be increasingly used to support the streamlining of specific task components, such as detailed design, scheduling, or the preparation of meeting minutes. Over time, the use of such tools will become a basic prerequisite for maintaining competitiveness. While individuals are responsible for applying best practices in their own daily work, organizations should support this activity so that identifying the best use cases or ensuring their ethical use does not rest solely on individual employees. At the same time, cross-sector localization efforts further support the use of AI solutions in tasks that require domain-specific expertise.

Another expected development trend is the expansion of use cases from point solutions toward more comprehensive solutions, where different functions and individual use cases are more tightly, and potentially automatically, integrated with one another. This development is closely associated with AI agents, which may significantly enhance the utilization of AI in the near future. AI agents can independently connect different applications and databases to perform assigned tasks, and multiple agents with different capabilities can collaborate to carry out tasks together. While this trend strongly supports productivity gains enabled by AI, it also introduces an entirely new level of complexity in ensuring ethical principles: for example, greater attention must be paid to data security as well as to the explainability and transparency of use cases.

 

Consider at least these!

It is important to internalize that consideration of ethical practices should be an integral part of the use case identification process. Although ensuring ethical practices requires preparatory work, in the long term it is not an obstacle but the foundation of operations.

When you consider all the aspects presented in the previous chapters, you have the essential building blocks of a good use case:

  • Ensure the transparency, confidentiality, and explainability of the AI system.
  • Consider the energy consumption of AI systems and the associated carbon dioxide (CO₂) emissions.
  • Collect and store only as much data as is necessary for the intended purpose.
  • Inform individuals how data related to them is being used and obtain consent for its use.
  • Ensure secure storage and transfer of data and monitor access rights.
  • Use only tools that have been verified as secure, and do not assume that information provided to an AI system is automatically protected.
  • Maintain proper documentation of the use of the AI system.
  • Ensure the legal basis of the AI system and the data it uses and establish clear responsibility and accountability mechanisms.
  • Ensure compliance with up-to-date ethical regulations and legislation.
  • Ensure the continuous implementation of ethical practices, for example through audits, and commit to ongoing monitoring and improvement.
  • Consider information security also after the AI system is decommissioned.
  • Ensure the security of partner and supplier relationships and the overall AI system ecosystem.

  • In the end, humans are always responsible for the decisions made by AI.
  • Ensure that there are shared rules of engagement for the use case, and that these are clearly communicated.
  • Ensure that employees have the competencies needed to apply the use case, considering the requirements of their specific roles.
  • Involve employees in the development of the use case and in the continuous improvement of ethical practices.
  • Ensure that the use case does not place employees in an unequal position or excessively burden individuals directly or indirectly involved in it.

Read more

Ministry of the Environment, KIRAHub & Minna learn. AI for built environment – online course. https://courses.minnalearn.com/en/courses/ai-for-built-environment/ (accessed 10 October 2024).

Faiz, A., Kaneda, S., Wang, R., Osi, R., Sharma, P., Chen, F., & Jiang, L. (2023). LLMCarbon: Modeling the end-to-end carbon footprint of large language models. https://arxiv.org/abs/2309.14393

Fu, Z., Chen, F., Zhou, S., Li, H., & Jiang, L. (2024). LLMCO2: Advancing Accurate Carbon Footprint Prediction for LLM Inferences. arXiv preprint arXiv:2410.02950.

Li, B., Jiang, Y., Gadepally, V., & Tiwari, D. (2024). Toward sustainable genai using generation directives for carbon-friendly large language model inference. arXiv preprint arXiv:2403.12900.