Empowering humans in an AI world

Andrew Law of the Open University talks to UNISON about the issues and opportunities posed by the AI revolution

A portrait of Andrew Law of the Open University

Over the last five years, AI has announced itself on the world stage. AI can now convincingly write academic essays, scripts and poems, it can create realistic images, and it is posing a threat to jobs which would have been considered untouchable a decade ago.

Andrew Law of the Open University talks to UNISON about the issues and opportunities posed by the AI revolution and how unions have a vital role to play in influencing its development and implementation.

AI-generated illustration of the Earth in blue and orange shades, reflected in water in the foreground, with the words ‘Empowering humans in an AI world’

Read the article

19 thoughts on “Empowering humans in an AI world”

  1. I feel AI may be a threat in the virtual world but not in the real, physical world. I feel a more worrying problem is the control of allocated workload through planned PPM platforms and their excessive use by employment companies.

  2. A says:

    The whole concept of AI needs to be regulated; it will no doubt get into the wrong hands, and I find that worrying.
    Do the benefits outweigh how we manage to live now, without AI? I think we should draw a line somewhere with where this is all going.

  3. I’m really not a fan of such stuff, so I will learn about it and find out more.

  4. Robert Smith says:

    I think it’s going too far now; there is no need for this advanced technology to be widely available. There are enough problems with technology misuse as it is. Life is becoming unstable and dangerous in many areas now. AI has to be tightly regulated, or technology will become so unsafe to use that it will outlive its usefulness completely.

  5. Anth Harrison says:

    It’s not the technology that’s the issue but its application. For example, AI being used to read breast scans is effective: it is quicker and has higher detection rates.
    Then we have the more sinister situations, like AI being used for deepfakes and misinformation, or cheating in essays/assessments.
    Then there are the ethical issues, such as jobs being replaced with technology/AI.
    Lots of things have the AI label attached, but in some cases it is mere marketing, which leads to confusion. Some AI technology has been around for over a decade – a quick rebrand as something new sells.
    Will jobs be replaced? Absolutely, but new jobs will emerge. If we upskill and find raw talent we can invest in, we can see it as an opportunity.

  6. diane riley says:

    At a base level it can be a great aid to developing new things and ideas, answering queries, assisting in planning and helping with mundane tasks. Used correctly, it can help customers get answers quicker and 24/7. This can reduce our need to do repetitive, mundane tasks, but it would also remove many jobs for people who may be unable to do other tasks. Answering tenants’ queries and so on provides a level of learning and insight we would otherwise lose: all knowledge would be wrapped up in AI, and the only way to develop skills, knowledge and thinking would be AI, or scrolling through data sheets of issues raised, which does not give a true insight into tenants’ feelings. Badly implemented AI for customer queries is very frustrating for the customer trying to find answers: basically, if it does not know or cannot assist, then after going round in circles on the options given you still have to wait for a human, or else it’s ‘computer says no’.

    On another level, if it is programmed to rebuild, remodel and improve its own design and coding, and if it is programmed to learn and develop ideas from information gathering, then – since the most illogical and damaging thing on the Earth is humans – how long before it comes to that conclusion? Not a conspiracy theory, just a thought.

  7. Maria Kowalska says:

    There’s a lot of speculation about AI and its capabilities, usually negative. Without doubt any system is open to abuse by individuals, but that’s not a reason not to implement it. What is important to understand is that AI, like any automated system, is built by humans, who have a limited intelligence capacity and understanding, and both are flawed as a result. The limits of AI can therefore be determined with this in mind. Yes, it can and will take over many of the menial jobs a lot of people do, such as shop work, clerical work and some aspects of secretarial work, but humans will be needed to ensure there is a proper understanding of the situations at hand and to implement the correct programme. An example of this is when you scan an item at the checkout and it doesn’t register the item when you put it on the packing shelf.
    However, the positive aspect is that it will make so many parts of human life seamless, from paying for services to buying items, ensuring the elderly can remain in their homes safely, ordering shopping and medication when needed, and alerting the emergency services if someone has a seizure or heart attack. It is positive more often than negative.

  8. Anthony E says:

    Hmmmm, I am more inquisitive after reading through this. UNISON should go ahead and grab this opportunity before it’s too late.
    When Bitcoin was introduced, I got the first-hand information but ignored it; today, I am still juggling my 12-hour shifts just to survive.

  9. Rodney Tough says:

    Robots and AI were written about decades ago by Dr Isaac Asimov, in stories that took a logical, realistic approach to the subject, because he was also a scientist. He believed that such things were, above all else, tools which we would use in the future, with many built-in safeguards (the same as vehicles and factory machines) to prevent them from harming us or our society. He found it absurd that we would create something that would hold itself above us and destroy us; he called that idea ‘the Frankenstein complex’. It’s understandable that such things cause concern, but there is nothing harmful about a tool. It is the use people put it to that causes harm.
    The concern should be about people using these things to harm the rest of us. That DOES make sense. They should be used to enhance our lives, not to replace us!

  10. D Saxton (Data Scientist) says:

    The genie is already a long way out of the bottle, and there is no putting it back…
    And AI is so much like a genie in some respects, because the danger is really embedded in the ‘questions’ rather than in the resulting answers – although there are also great risks in not understanding the basis on which the answers were created.

    However, AI is here to stay: no ban could be effective internationally against such a powerful and useful technology – it would be like trying to ban digital encryption, used worldwide in so much of what society does every second. And with a technological advance as powerful as AI, humankind would be mad to even try to suppress AI’s proper use for the benefit of ‘strictly all’.

    The essential thing society must attempt to do as soon as possible, if it is to regulate, control or guide the use of AI, is to require all developers of AI to openly publish the limitations of their methods with every output (as caveats) to the users of that AI – a duty that ranks above commercial interest – and to abide by a set of data ethics that require, as a minimum, documenting the due diligence steps taken to consider whether the AI developed by that organisation or project either is, or could be, misused.

    This misuse could be deliberate (by bad actors – e.g. deepfakes for fake news and unauthorised publication, and many other questionable-at-best uses) or, far more likely, accidental: through a lack of understanding of the tech or its limits; a lack of foresight about how it could later be misappropriated by unethical users; or a lack of consideration of the ramifications of the chosen algorithms, because the governance procedures of the development phases did not adequately consider the (always essential) data ethics and the unintended bias that is so easy to introduce accidentally, simply by not thinking them through in all their depth and so not building them out of the AI’s modelling sufficiently. The danger of bias that can come with AI is a danger to all of us, but it is not usually a deliberate, conscious action; rather, it is usually a consequence of an insufficiently diverse consideration of the data ethics and potential misuses, or of a poorly considered or overly simplistic treatment of the data that fails to recognise the biases which have not been ‘built out’ or compensated for completely enough.

    There are huge benefits and very large risks too with AI, as with many past and present technologies, from the automation of looms in the industrial revolution to the misuse of any tool since.

    Humans are really poor at remembering to adequately consider the impacts and negative possibilities before applying innovation, and each AI developed with too little data ethics is another example of this.

    AI is already bringing with it ‘facts’ that will ultimately be used to cut jobs. Society is going to find that ‘specialist knowledge’ careers will perhaps nearly all become ‘adapt or get sacked’ careers, with people moving into newly developing posts every few years. It is often quoted that 40% of all jobs will either not exist or be unrecognisable within a decade; even if this is an overestimate, it is still an existential threat to traditional job security, and unions will need to really step up to help their members meet it in a realistic, technology-embracing and forward-looking way. That means holding employers to a moral obligation, embedded in policy, to create meaningful and equivalently paid (or better) widespread staff development programmes and new posts – retaining and continuously upskilling staff as a growth model, particularly those whose posts are at risk of deletion, and moving from a 20th-century workforce mindset to a future-proofed, efficient and adequately redeveloped workforce of the same size doing much more. It is naive of me to think this will be adopted by all organisations or staff without challenge, but as a way forward, do organisations want to shrink their workforce to do the same with fewer staff, or develop their workforce to do much more while honouring their staff’s loyalty and commitment to develop?

  11. Ade says:

    I think we’ve let technology and narrow business interests get ahead of us. We (the government, regulatory bodies,…) are in catch-up mode now, being advised by those same interests.

    AI does have its upsides but it is very much a two-edged sword.

  12. D Saxton (again) says:

    And with that in mind, society needs to develop – at pace, and without commercial interests watering them down – robust, standardised data ethics framework(s) for widespread adoption by organisations using AI and by AI developers alike, with a view to making it a legal requirement to follow an initially national, and eventually internationally developed, framework. Such a framework should require developers to remember their obligations to consider the unintended consequences of not ‘building out’ bias, of not understanding limitations, and of building systems that embed AI without suitable human overview/supervision – or chatbot systems for service support that use AI but have no ‘speak to a human’ parallel process, for when ‘computer says no’!

  13. S Kelly says:

    Interesting article. I feel very ignorant of this topic, and therefore massively under-prepared.

  14. Dan says:

    There’s some naivety here from those suggesting that AI will not have implications in the physical world. It already has. Efficiency governs everything, and as more and more businesses and sectors employ AI to streamline their working systems and practices, many ‘physical’ roles will disappear.

    Some people may suggest that traditional, manual jobs – for example the building trade, hairdressing and so on – will be unaffected; however, this is a ludicrous suggestion, as more and more people will flock to these types of roles and there won’t be jobs for the majority. Those who are employed in those roles will presumably not be able to make as much money either, as the supply will far outstrip the demand.

    How many plumbers do we need in our village? How many hairdressers? Human beings weren’t the first species on this planet and they won’t be the last. I could well imagine a future, not long from now, where humans don’t exist and the evolutionary baton has been passed to a new, technological species.

    Or you can bury your head in the sand, go to the Winchester, have a nice cold pint, and wait for all of this to blow over…

  15. Thomas Murray says:

    AI sounds great, and I am sure it will be if we are all educated and the ‘bully money people’ are not allowed to dictate.

    If the people with lots of money and brains are allowed to progress AI, we could be living in a world where AI rules over humans, with the money people pulling all the strings (not in my lifetime, but it could happen).

  16. Rob says:

    All technology has advantages and disadvantages, some of which we can anticipate and mitigate. It is the unknown or unintended that is always problematic. Will AI be abused or weaponised? A seemingly benign technology may be repurposed and then present a threat to human wellbeing, giving advantage to some and not the majority. Clearly, there are concerns about how AI will affect the workplace and employment. However, a properly regulated approach to the development and introduction of AI will reduce those risks and ensure AI is used for the improvement of the human condition. I wonder whether we may be opening Pandora’s box or taking the next step into a future that will see great advantages. The late Stephen Hawking suggested that AI was the greatest threat facing humans – but even he was not sure whether it would be the best or worst thing that has ever happened to us!

  17. Treven O'Neil says:

    I’ve recently read a book on AI by a professor at Oxford University, who has received a list of awards for his work and who is a supporter of AI and its further development. Frankly, I was horrified, not least because he clearly spells out that the processes much of existing AI goes through are not actually understood by the people who are developing it. It’s a matter of setting it up and setting it running, and then it goes into realms which even its creators do not understand or control. It is well established historically that new technologies destroy livelihoods. To believe that, under an economic system where technology is owned by people whose only incentive is their own enrichment, this technology will be used to benefit the larger society is unrealistic. It is also a threat to civil liberties, which should be of concern to any trade unionist or anyone concerned for democracy and free speech. It is already being used in China to monitor the activities and conversations of all citizens. I am not opposed to new technologies, but I firmly believe that the threats they bring to the majority of us are significant and we need to be aware of them.

  18. Steven Rees says:

    As the old saying goes, “if it ain’t broken, then why fix it?” Many people, including myself, should be concerned. Even the creators of AI are worried, so what does that say? I’m glad my job doesn’t involve technology, but it still concerns me, as it’s part of something much larger that will affect us all in years to come. Enjoy your freedom while it lasts, folks!

  19. Charlene says:

    I don’t trust AI; it will cause many issues in the long run. Old school was the safest option. Technology is going too far and is not to be trusted.
