The UK government has allocated £100 million for new research at the University of Sheffield to build principles for the responsible use of AI across the public, private and third sectors.
The Secretary of State for Science, Innovation and Technology, Michelle Donelan has announced funding for new research that will deliver next-generation innovations and insights into the use of artificial intelligence (AI) and underline the UK’s commitment to maintaining a leadership position in AI research and its ethical deployment.
The University of Sheffield will host two projects that will define what responsible AI use is across public and cultural sectors.
Supported with funding from the Arts and Humanities Research Council (AHRC) through the Bridging Responsible AI Divides (BRAID) program, these projects will produce early-stage research and recommendations to inform future work in this area. They illustrate how the UK is at the forefront of defining responsible AI and exploring how it can be embedded across key sectors.
Dr Joanna Tidy will lead a team based in the University of Sheffield’s Department of Politics and International Relations to investigate the responsible use of AI in the museum and heritage sector, specifically in relation to biases in AI which stem from the colonial history of museum collections. The project is in partnership with the Royal Armouries, the UK’s national museum of arms and armour.
Dr Tidy said “Museums and heritage institutions are increasingly using AI tools such as Machine Learning, Natural Language Processing, and Machine Vision to enhance visitor interaction with their collections.
“However, a well-recognized problem with AI is bias, including how AI algorithms reproduce skewed underlying data. For museums and heritage institutions, a challenge for responsible AI use lies in how underlying biases in museum collections, such as those rooted in colonial origins and histories, are reproduced through AI data processing and outputs.”
“It is a crucial time to be defining what the responsible use of AI can and should look like for different settings, and we need to work across academic boundaries and engage with a wide range of applied expertise to explore ways forward.”
Dr Denis Newman-Griffis will lead a second team from the University of Sheffield’s Information School and Department of Philosophy to work with organizations across public, private, and third sectors to build shared learning, values and principles for responsible AI, enabling best practice development, helping to organize information and supporting decision making.
Partners on this project include the British Library, Sheffield City Council, the multinational data science consultancy firm Eviden, and the Open Data Institute through the Data as Culture program.
Dr Newman-Griffis said “This project will help us learn what ‘responsible artificial intelligence’ really means for teams and organizations dealing with the changing AI landscape today.
“Whether it is in helping to organize and share research and heritage materials, informing data-driven policymaking in local government, or mining troves of data for business insight, using AI responsibly needs a clear understanding of who is involved and what matters to them around AI use. Our research will help map out new directions for making responsible AI a living, breathing practice and lay the groundwork for other organizations to build their own policies on AI much more easily.”
Prof Laurence Brooks, from the University of Sheffield’s Information School, said “AI has the potential to be of transformational benefit to the world, and already exists within so many aspects of our lives. But, as with other digital technologies, it also has the potential for ignorant and unfair use. The difference is the choices we make, what we call responsible AI. This project aims to help organizations develop an understanding of responsible AI and contribute to a better world.”
Dr Susan Oman, from the University of Sheffield’s Information School, said “We’ve a great team of colleagues and partners with a keen interest in the burgeoning responsible AI space. We’ll be working with an artist, as well as these organizations to intervene in questions of what and who responsible AI is for, and how it works in practice.”
Professor Christopher Smith, Executive Chair of the Arts and Humanities Research Council and UKRI International Champion said “The impact of AI can already be felt in many areas of our lives. It will transform our jobs and livelihoods, and impact on areas as diverse as education, policing and the creative industries, and much more besides. UKRI’s research will be at the heart of understanding this new world.
“The research which AHRC announced today will provide lasting contributions to the definition and practice of responsible AI, informing the practice and tools that are crucial to ensure this transformative technology provides benefits for all of society.”