Cloud computing giants announce fair artificial intelligence tools

2022-08-06


Untrained artificial intelligence (AI) systems can reinforce bias, so AI systems must be trained for fairness. Experts note that AI fairness is specific to the dataset behind each particular machine learning model: a model that behaves fairly on one dataset cannot be assumed to behave fairly on another. AI fairness is a new challenge, and the large cloud providers are developing and announcing tools to help address it.

Facebook announced in May 2018 that it was developing an internal software tool to search for bias in training datasets. Since then, Amazon, Microsoft, Google, and most recently IBM have announced open-source tools for checking bias and fairness in trained models.

The following looks at what these tools are designed to do, how they relate to one another, and why IBM's trust and transparency announcements are important.

The AI fairness challenge

AI's core challenge is that deep learning models are "black boxes". For a human, understanding how each training data point affects each output classification (inference) decision is very difficult, and often impossible. The term "opaque" is also used to describe this hidden classification behavior. It is hard to trust a system when you do not understand how it makes decisions.

In the machine learning developer community, the opposite of opaque is "transparent". A transparent deep learning model would reveal its classification process in an understandable way, but research on creating transparent models is still in its early stages.

In January 2018, a large group of Chinese organizations contributed to an artificial intelligence standardization white paper. The white paper acknowledges the ethical problems in AI without yet offering remedies, stating:

"we should also be alert to artificial intelligence systems to make ethically biased decisions. For example, if universities use machine learning algorithms to evaluate admissions, and the historical admission data used for training (intentionally or unintentionally) reflect previous admission procedures (such as gender discrimination) Machine learning may exacerbate these biases during repeated computations, Create a vicious circle. If not corrected, there will be prejudice in society in this way. "

Contributors to the white paper include Alibaba Cloud, Baidu, China Telecom, Huawei, IBM (China), Intel (China), Tencent, and others. I believe these organizations are also working on the problem of bias and discrimination in trained AI systems, but they have not publicly announced tools.

The current state of AI fairness

Facebook

Facebook identified only one of its internal anti-bias software tools by name in its May 2018 announcement: Fairness Flow, which measures how a model interacts with specific groups of people. The Facebook team worked with several schools and institutes to develop its tools. Facebook has not publicly released the Fairness Flow tool.

Amazon

AWS published a blog post in July 2018 that framed machine learning fairness in terms of accuracy, false-positive rate, and false-negative rate. However, AWS has not released a developer tool for evaluating fairness or other aspects of model training.
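
To make that framing concrete, here is a small sketch (hypothetical arrays, not AWS code) of how accuracy, false-positive rate, and false-negative rate can be compared across a sensitive attribute to see whether a model's errors fall unevenly:

import numpy as np

def group_error_rates(y_true, y_pred, group):
    """Print accuracy, FPR, and FNR for each value of a group attribute."""
    for g in np.unique(group):
        t = y_true[group == g]
        p = y_pred[group == g]
        acc = (t == p).mean()
        fpr = ((p == 1) & (t == 0)).sum() / max((t == 0).sum(), 1)
        fnr = ((p == 0) & (t == 1)).sum() / max((t == 1).sum(), 1)
        print(f"group {g}: accuracy={acc:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")

# Toy usage with random labels and predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
group_error_rates(y_true, y_pred, group)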

Microsoft

Microsoft Research published a paper in July 2018 describing a fairness algorithm for binary classification systems, along with an open-source Python library that implements the algorithm. Microsoft's work covers both pre-processing the training data and post-processing the model's output predictions. However, it is not packaged as a high-level developer tool; it is aimed at Python developers who are comfortable working directly with deep learning code.
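
This line of work is available today as the open-source fairlearn package. Below is a minimal sketch of its reductions API (an assumption: fairlearn grew out of this Microsoft work, and the current API may differ from the 2018 release), which retrains a base classifier under a demographic-parity constraint:

import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
sex = rng.integers(0, 2, 1000)                 # synthetic sensitive attribute
y = (X[:, 0] + 0.8 * sex > 0.5).astype(int)    # labels correlated with 'sex'

# ExponentiatedGradient repeatedly reweights and refits the base estimator
# until the demographic-parity constraint is approximately satisfied.
mitigator = ExponentiatedGradient(
    LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sex)

y_fair = mitigator.predict(X)                  # constrained predictions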

Google

In September 2018, Google's People + AI Research (PAIR) initiative went a step further than merely providing a developer library, announcing its What-If Tool. The What-If Tool enables developers to visually analyze input datasets and trained TensorFlow models, including fairness evaluation. Google's What-If Tool is now part of its open-source TensorBoard web application.
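
The tool can also be launched from a notebook via the witwidget package. Below is a sketch under stated assumptions: the toy tf.train.Example records and the stand-in predict function are hypothetical, and the exact witwidget API may vary by version:

import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Build a couple of toy tf.train.Example records (hypothetical features).
def make_example(age, score):
    return tf.train.Example(features=tf.train.Features(feature={
        'age': tf.train.Feature(int64_list=tf.train.Int64List(value=[age])),
        'score': tf.train.Feature(float_list=tf.train.FloatList(value=[score])),
    }))

examples = [make_example(25, 0.8), make_example(40, 0.3)]

def predict_fn(examples):
    # Hypothetical stand-in for a real model: returns [P(neg), P(pos)].
    return [[0.5, 0.5] for _ in examples]

config = (WitConfigBuilder(examples)
          .set_custom_predict_fn(predict_fn)
          .set_model_type('classification'))
WitWidget(config, height=600)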

IBM

A week after Google's What-If announcement, IBM went one better by announcing a visual developer tool that can be used with any machine learning model. IBM's branded AI OpenScale tool enables developers to analyze any machine learning model from any integrated development environment (IDE). IBM has also open-sourced its machine learning fairness tools as the AI Fairness 360 toolkit. IBM containerizes its machine learning tool chain with Kubernetes orchestration so that it can run in any public cloud (as you would expect, its AI OpenScale tutorial runs in Watson Studio on IBM Cloud).
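
Here is a short sketch using the AI Fairness 360 toolkit (aif360). The metric names are real aif360 APIs, but the dataset below is a synthetic stand-in rather than IBM's tutorial data:

import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'feature': rng.normal(size=1000),
    'sex': rng.integers(0, 2, 1000),       # protected attribute
    'label': rng.integers(0, 2, 1000),     # binary outcome
})

# Wrap the DataFrame so aif360 knows the label and protected attribute.
data = BinaryLabelDataset(df=df, label_names=['label'],
                          protected_attribute_names=['sex'])
metric = BinaryLabelDatasetMetric(
    data,
    privileged_groups=[{'sex': 1}],
    unprivileged_groups=[{'sex': 0}])

print('disparate impact:', metric.disparate_impact())
print('statistical parity difference:', metric.statistical_parity_difference())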

AI fairness, transparency, and open source

Ultimately, the best answer to bias in a trained machine learning model will be to build transparent models. But because we do not yet know how to do that, today's deep learning models remain black boxes, and bias and fairness evaluation tools must therefore examine each model's input datasets and output inference results. I believe more tools will follow this path.

For now, IBM's open-source AI fairness toolkit sets a good example by working with any model type on any public cloud.

Source: Intelligence can strengthen bias - cloud giants announce tools for AI fairness

Compiled by: Qian Xinyao
