Using AI in housing responsibly

Its rapid growth may have alarmed many, but artificial intelligence is here to stay. That means the use of AI in housing is likely to increase, but how can it be done responsibly and to what end? NEC’s director of housing solutions, Trevor Hampton, weighs up the pros and cons and calls for sector-wide oversight as the tech rolls out.

AI in housing needs careful management

However you feel about it, AI is here to stay. In just a short space of time, it’s jumped from science fiction to essential business tool capable of driving efficiencies and boosting personalisation. Inevitably, this rapid rise has intensified debates around ethical use.

These debates are particularly relevant for social housing, an area that’s seeing soaring levels of vulnerability. Applying AI here requires very careful management to keep it aligned with housing providers’ core values. So how can AI be introduced responsibly?

Introducing AI in a socially responsible way

Fear and anxiety have always surrounded new technology, and the pace of AI’s rise has been particularly fast. While it has the potential to save lives, cut costs and improve tenants’ wellbeing, it could also have a negative impact if its intended purpose is not fully thought through. The point is that the benefits must outweigh the potential risks.

That’s why some housing services are just not suitable for AI. Waiting lists or housing-need allocations are good examples, because the potential harm to tenants is too great. What if bad training data or a lack of testing means the AI recommendation on who has the greatest need is wrong?

But if AI in housing is used to spot tenants at risk of falling into debt, or to help prevent damp and mould, it can deliver huge benefits with negligible risk. Here, inaccuracy would create only a small inconvenience, such as an unnecessary phone call or visit.

How you train AI matters

One criticism levelled at AI is its potential to discriminate. Undoubtedly, the way we train AI matters. We need to consider accountability, fairness and transparency hand-in-hand with development.

If flawed data is used to train AI algorithms, the results will be inaccurate. Making the right decision about what to include and exclude is vital. For example, if factors such as age and gender have no bearing on the problem, then don’t include them; doing so risks an unconsciously biased or flawed outcome.
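As a minimal sketch of this idea in Python, an explicit allow-list can keep protected attributes out of the training data entirely. The field names here are hypothetical, purely for illustration:

```python
# Hypothetical tenant/property records; field names are illustrative assumptions.
records = [
    {"age": 42, "gender": "F", "humidity": 78, "wall_insulation": "poor"},
    {"age": 29, "gender": "M", "humidity": 55, "wall_insulation": "good"},
]

# Only features with a plausible bearing on the outcome are kept;
# protected attributes such as age and gender are excluded up front.
ALLOWED_FEATURES = {"humidity", "wall_insulation"}

def select_features(record):
    """Drop any field not on the explicit allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FEATURES}

training_rows = [select_features(r) for r in records]
```

An allow-list (rather than a block-list) means a newly added field is excluded by default until someone deliberately decides it belongs in the model.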

Using AI to find damp and mould

We’ve worked with several large housing providers to use AI to predict the likelihood of damp and mould in their properties. To train the algorithm to make more accurate predictions, we combined tenant and asset data to give a more rounded picture, enabling the housing providers to better identify properties according to risk.

To mitigate bias, the data was rigorously screened during development and checked against results that are known to be positive indicators for the likelihood of damp and mould. It was then further screened by the housing providers themselves to verify that the data used is good quality and has been interpreted correctly.
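The check against known positive indicators can be sketched as a simple validation step. This is an illustrative outline only, with made-up property IDs, scores and threshold, not the actual screening process used:

```python
# Properties independently confirmed to show damp and mould (hypothetical IDs).
known_positive_ids = {"prop_12", "prop_34"}

# Hypothetical model output: risk score per property.
model_risk_scores = {
    "prop_12": 0.91,
    "prop_34": 0.83,
    "prop_56": 0.12,
}

RISK_THRESHOLD = 0.7  # illustrative cut-off, not a recommended value

# Properties the model would flag for attention.
flagged = {pid for pid, score in model_risk_scores.items()
           if score >= RISK_THRESHOLD}

# Recall against known positives: what share of confirmed cases did the model catch?
recall = len(flagged & known_positive_ids) / len(known_positive_ids)
```

A low recall here would indicate the model is missing confirmed cases, prompting a review of the training data before any rollout.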

Critically, if the AI’s recommendation is that a particular property is vulnerable to damp and mould, it’s the housing officer who makes the final decision on whether to inspect.

Developing the right process and practice

The regulatory framework around AI is continuing to evolve in the UK, the EU and worldwide. In particular, bias testing – exploring how AI makes the recommendations and what the outcomes are – will be key to deploying AI safely and ethically.

A sector-wide approach to best practice could ensure AI works to improve tenants’ lives while staying within ethical boundaries. That’s why we recommend that housing providers considering AI aim for sign-off at a higher level than a traditional tech investment. Setting up oversight boards that report to executive committees, for example, would help to ensure that data inputs can’t introduce bias, that the level of testing is appropriate, and that AI continues to deliver pre-defined tenant benefits.

If applied ethically, AI can make a significant contribution to meeting social need. By adopting a sector-wide approach, IT suppliers and housing providers can work together to set the benchmark, ensuring that AI systems are inclusive, responsible and put tenants’ wellbeing and safety first.