Does the future of HR belong to AI?
The corporate world is currently haunted by uncertainty: In what ways will AI change our work? Who will lose their job, and does the manner in which we put digital tools to use really benefit the people who rely on them? Enter Corporate Digital Responsibility (CDR), also known as digital humanism.
Digital expert Martin Giesswein and Barbara Stöttinger, Dean of the WU Executive Academy, have taken a closer look at Human Resource Management (HRM) to figure out the specific applications and ethical deliberations we need to consider when using artificial intelligence in HRM.
“Let’s start with our conclusions,” Martin Giesswein suggests. Managers would be well advised to heed the following four principles in selecting and using digital tools within their companies:
These four principles are derived from the cosmos of digital humanism, which offers answers to our concerns and hopes about today’s technological possibilities. The term is easily explained: a digital technology is not used for its own sake but only if it directly benefits employees and customers, only if it has no ethical repercussions for anyone, and only if it can be reconciled with a company’s climate action goals.
There is a myriad of synonyms for the term “digital humanism.” The investment firm Goodshares coined the phrase People, Planet, Profit for it. The German Association for the Digital Economy (BVDW) calls it CDR, a logical continuation of the term Corporate Social Responsibility (CSR). Some companies have already woven their own digital principles into their overall strategy, while others plan on adding a “D” to their ESG initiatives, turning them into Environmental Social Digital Governance.
Barbara Stöttinger
Whatever we call it, the bottom line is that this technology is here to serve humans and support them in their work – and not the other way round.
As Alfred Mahringer, Senior Director of Human Resources at A1 Telekom Austria AG, reports, his employees can check their own skills against market requirements based on a self-evaluation and match them with internal roles and job offers. The AI employed by People Analytix (https://people-analytix.com), for example, indicates which job profiles best match one’s own competence portfolio and which skills still need to be acquired for a certain role. This way, employees can decide for themselves whether and how they wish to make use of the in-house learning offerings to keep abreast of the newest developments in their field. In this case, AI acts in the best sense of digital humanism, as it doesn’t just spit out a centrally coordinated analysis commissioned by the HR department. “We don’t even get to see the data unless the employee looking to upskill decides to share the results with HR or their supervisor in order to create an individual learning path or be considered for new roles,” Mahringer points out.
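To make this kind of matching more tangible, here is a minimal sketch of how a skill-to-role comparison could work in principle; the role profiles, skill names, and simple coverage score are invented for illustration and are not the actual People Analytix method.

```python
# Illustrative sketch only: a simplified skill-to-role matching step.
# The roles, skills, and scoring below are hypothetical and not based
# on the actual People Analytix product.

from dataclasses import dataclass


@dataclass
class RoleProfile:
    name: str
    required_skills: set[str]


def match_roles(employee_skills: set[str], roles: list[RoleProfile]):
    """Rank roles by how many required skills the employee already covers,
    and list the skills still missing for each role."""
    results = []
    for role in roles:
        covered = employee_skills & role.required_skills
        missing = role.required_skills - employee_skills
        coverage = len(covered) / len(role.required_skills)
        results.append((role.name, round(coverage, 2), sorted(missing)))
    # Best-matching roles first
    return sorted(results, key=lambda r: r[1], reverse=True)


# Example: a self-assessment matched against two internal job profiles
profiles = [
    RoleProfile("HR Data Analyst", {"statistics", "sql", "people analytics"}),
    RoleProfile("Recruiter", {"interviewing", "employer branding", "sourcing"}),
]
print(match_roles({"sql", "statistics", "interviewing"}, profiles))
```

In the spirit of digital humanism described above, the output of such a comparison would stay with the employee rather than being pushed to HR automatically.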
“These past weeks, I’ve been testing AI systems that promise to greatly facilitate the transfer of knowledge from soon-to-retire employees to their organizations,” Giesswein recounts, illustrating their potential with a practical example.
The new reality: “Applicants send AI-generated CVs and cover letters. At the HR department, no human sets eyes on these documents – once again, it is AI that decides which applicants get to the next round. I can certainly understand that companies need to rely on AI simply to cope with the myriad of CVs they receive, and that it might also reduce human bias, but it still makes me wonder whether everything about it is helpful,” says Barbara Stöttinger, adding: “With the help of technology, maybe we will manage to find a completely different way.” Here’s one possible scenario for the future: applicants record a 30-second video presenting themselves. A watermark and a government-secured identification (eID) ensure that the video has not been generated by AI or recorded by a third person. These 30 seconds let the recruiter gain a first impression of the person applying for the job. “That’s definitely something a person is better at than a machine,” Stöttinger is convinced. The supporting AI confirms that the data shared by the applicant match the job description. As a last step, there is a personal meeting (online or offline) as part of the classic recruitment process.
There are many questions the HR department is asked over and over: Where will I get my reference? How does time recording work? Can a day off be turned into sick leave if I get ill? More and more companies already rely on HR chatbots that recognize and respond to recurring standard questions using natural language processing. The high-performing WienBot, for example, answers citizens’ questions about the city’s services, such as the opening hours of public pools, which documents are necessary to renew one’s passport, or how to get help with applying for energy cost subsidies. “The quality of these chatbots currently surpasses that of generative AI systems because the answers are either written by humans and stored in the system or derived from reliable sources (the website and databases of the City of Vienna). Generative AI, on the other hand, produces a new answer for each query, which comes with the risk of so-called hallucinations, i.e., answers that are factually wrong but sound deceptively plausible,” Martin Giesswein explains. Sindre Wimberger and his team at the City of Vienna are currently exploring the further development of first-level support with the help of a prototype.
Martin Giesswein
In the future, the quality of AI systems can be expected to increase, making it possible to leave the tedious answering of routine questions to them.
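To illustrate the distinction Giesswein draws, here is a minimal sketch of a retrieval-style HR bot that only returns answers curated by humans and therefore cannot hallucinate; the FAQ entries and the simple word-overlap similarity are assumptions for illustration and do not reflect how WienBot is actually implemented.

```python
# Illustrative sketch: a retrieval-based FAQ bot that only returns
# human-curated answers, in contrast to a generative model that
# produces a new answer for every query. The FAQ entries and the
# word-overlap similarity are assumptions, not the WienBot code.

FAQ = {
    "Where will I get my reference?":
        "References are issued by the HR service center; please request one via the HR portal.",
    "How does time recording work?":
        "Working hours are logged in the time-tracking tool and approved by your supervisor each month.",
    "Can a day off be turned into sick leave if I get ill?":
        "Yes. If you submit a doctor's note, the vacation day is converted into sick leave.",
}


def answer(question: str, threshold: float = 0.3) -> str:
    """Return the curated answer whose question overlaps most with the query,
    or escalate to a human if nothing is similar enough."""
    query_words = set(question.lower().split())
    best_question, best_score = None, 0.0
    for known_question in FAQ:
        known_words = set(known_question.lower().split())
        score = len(query_words & known_words) / len(query_words | known_words)
        if score > best_score:
            best_question, best_score = known_question, score
    if best_question is None or best_score < threshold:
        return "I am not sure. Let me forward this question to a colleague in HR."
    return FAQ[best_question]


print(answer("How does the time recording work?"))
```

Because every answer comes verbatim from the curated set, such a bot can only be wrong if a human-maintained entry is wrong, which is exactly the quality advantage Giesswein points to.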
Text, audio, video, or image: the state-of-the-art services provided by Midjourney, OpenAI, or D-ID from Israel (https://www.d-id.com) are revolutionizing the way online learning is organized and implemented. Even without any programming skills, HR managers will be able to produce their own digital learning content cost-efficiently and adapt it in any way they wish.
In the coming years, a massive development push can be expected in the services supplied by traditional online learning platforms and content agencies. “As a lecturer on topics of the digital economy, I won’t have to present my input ‘live’ in front of 25 people in an actual room in five years’ time. Instead, interested participants will be able to choose whether they want to experience my lecture on site or access my content interactively and online through my avatar at any time – a clear improvement compared to the ‘static’ videos and podcasts available today,” Giesswein believes.
“AI-based HR start-ups offering practical solutions, for example regarding employer branding or recruiting, are currently a dime a dozen,” says personnel expert Martina Ernst, who has co-developed the new People & Culture Management program of the WU Executive Academy. “In one of our most recent modules, the lecturer Melisa Gibovic-Danner, Head of People & Culture Strategy at Boehringer Ingelheim, showed us a shortlist put together from about 50 new and exciting apps, all on the topic of employer branding and recruiting.”
One of them is www.myVeeta.com, which helps companies stay in touch with suitable candidates and alumni in a simple and professional way. Another is www.firstbird.com, a software solution that helps companies win highly qualified employees through recommendations from their current workforce.
“The wonderful thing about these apps is that they are offered as SaaS solutions and can therefore be used by People & Culture executives quite effortlessly, as they can easily be connected to the often still very rigid HR architectures used in companies,” Ernst says approvingly.
Martina Ernst
For HR executives, agility as part of a set of new work competencies will be an absolute must-have in the future. It is the only way to design a flexible e-HRM architecture capable of responding to new trends at any time through the appropriate software-as-a-service (SaaS) solutions.
“But the systems we use in HR are provided by our IT department, so there’s really not much we can do.” “Does that sound like something an employee of your company might say? Within the framework of digital humanism, new standards for the configuration and creation of software systems are evolving: IT departments, HR, and representatives of the future users plan the systems together, making sure they live up to the ethical requirements of all stakeholders,” Barbara Stöttinger explains.
In the course of this process, so-called ethical value requirements are defined, which help determine the functions of the future systems. There is also a standard and a certification for all this (ISO/IEC/IEEE 7000, https://standards.ieee.org/ieee/7000/6781). Sarah Spiekermann-Hoff from the Vienna University of Economics and Business has been co-developing this standard since 2016 in her role as vice-chair. It is already being used by IT companies, the UN, and the City of Vienna. Companies’ motivations for adopting the standard vary, ranging from managers’ genuine sense of ethical responsibility, to guarding against later media-related or legal issues, to increasing the acceptance and use of software among the workforce.
Applicants who are in a position to choose between two companies increasingly ask about their true corporate culture. Kununu and other services are trying to build on this interest. Getting certified or receiving an award for upholding the principles of digital humanism could therefore attract more applicants to a company. “Just as with a company’s ecological responsibility, where greenwashing backfires, pretending to use digital tools with people’s best interests in mind can easily be unmasked as ‘humanism washing’ and have a negative impact on a company,” Giesswein cautions.
These days, the image of a company is strongly determined by journalists’ reporting, its own online presence, and Kununu. “Try asking ChatGPT whether your company might be a good fit for someone looking for a certain kind of position. The AI is obviously trained to reply that it can’t answer this definitively, but it ends up spitting out about 400 words about the company (in our case, the WU Executive Academy) anyway,” Barbara Stöttinger reports.
The question for HR managers is thus clear: how can we ensure that, when AI models are trained, they are fed accurate information about our companies? Not even the technical aspects of this question have been settled so far. Nevertheless, it is important to start dealing with this issue now to reduce the risk of false or biased AI-generated information about our companies.
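For anyone who wants to repeat Stöttinger’s experiment programmatically rather than in the chat window, a minimal sketch using the OpenAI Python client could look like the following; the model name, company, role, and prompt wording are assumptions chosen for illustration.

```python
# Minimal sketch of Stöttinger's experiment, run via the OpenAI Python
# client instead of the chat interface. The model name and the prompt
# wording are illustrative assumptions; any chat-capable model would do.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

company = "WU Executive Academy"  # replace with your own organization
role = "program manager in executive education"  # hypothetical position

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            f"Would {company} be a good fit for someone looking for a "
            f"position as {role}? What do you know about its culture?"
        ),
    }],
)

# Inspect what the model currently "believes" about the company.
print(response.choices[0].message.content)
```

Running such a query from time to time is one simple way to monitor what AI systems are already telling potential applicants about your organization.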
Exciting times for HR managers ...