Knowledge Mining – Fundamentals of Artificial Intelligence

Knowledge Mining

“Knowledge mining” is the process of creating a searchable store of information. The term also describes techniques for extracting information from large volumes of unstructured data.

Azure Cognitive Search is one such knowledge mining solution. It is a private, enterprise-grade search service that includes tools for building indexes. Those indexes can be kept for internal use only or used to make content searchable on public-facing internet assets.

For knowledge mining over documents, Azure Cognitive Search can apply the built-in AI capabilities of Azure Cognitive Services, including image processing, content extraction, and natural language processing. These AI capabilities allow it to index documents that were previously unsearchable and to quickly surface insights from large volumes of data.
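
As an illustration, the sketch below shows how a client application might query such an index using the azure-search-documents Python SDK. The endpoint, index name, key, and field names are placeholders invented for this example, not values from any real deployment.

    # A minimal sketch of querying an Azure Cognitive Search index with the
    # azure-search-documents Python SDK. Endpoint, index name, key, and field
    # names below are placeholders for illustration only.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    endpoint = "https://<your-search-service>.search.windows.net"  # hypothetical service
    index_name = "documents-index"                                  # hypothetical index
    credential = AzureKeyCredential("<your-query-key>")             # hypothetical key

    client = SearchClient(endpoint=endpoint, index_name=index_name, credential=credential)

    # Full-text search over content that AI enrichment (OCR, entity extraction,
    # language detection, and so on) made searchable during indexing.
    results = client.search(search_text="quarterly revenue forecast")
    for doc in results:
        # "title" is an assumed field name in this hypothetical index.
        print(doc.get("title"), doc.get("@search.score"))
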

Principles of Responsible AI

Microsoft follows a set of six principles to guide decisions about how software with artificial intelligence is built. These principles are intended to ensure that AI applications provide effective solutions to difficult problems without unintended harmful effects.

The following are the six principles of responsible AI:

  • Fairness
  • Reliability and safety
  • Privacy and security
  • Inclusiveness
  • Transparency
  • Accountability

Now, let us understand these principles in some detail.

Fairness

AI systems should treat all people fairly. Suppose, for example, you build a machine learning model to support a bank’s loan approval application. The model should decide whether a loan application is approved or rejected without bias, where bias could be based on gender, ethnicity, or any other factor that gives certain groups of applicants an unfair advantage or disadvantage. One simple way to check for this kind of group-level bias is shown in the sketch below.
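
The data, group labels, and threshold in this sketch are invented purely for illustration; it simply compares approval rates across groups of applicants, which is one common signal of unfairness.

    # A minimal sketch of a demographic-parity style check on hypothetical
    # loan-approval decisions. All data here is invented for illustration.
    from collections import defaultdict

    # (group, model_decision) pairs: True = loan approved, False = rejected.
    predictions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    approved = defaultdict(int)
    total = defaultdict(int)
    for group, decision in predictions:
        total[group] += 1
        approved[group] += int(decision)

    rates = {g: approved[g] / total[g] for g in total}
    print("Approval rates by group:", rates)

    # A large gap between group approval rates is a signal to investigate bias.
    gap = max(rates.values()) - min(rates.values())
    print("Demographic parity difference:", gap)
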

Azure Machine Learning includes model interpretability capabilities that measure how much each feature of the data influences the model’s prediction. Data scientists and developers can use this capability to identify where a model is biased and take steps to mitigate it.
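
The underlying idea of measuring feature influence can be illustrated generically. The sketch below uses scikit-learn’s permutation importance on synthetic data rather than the Azure Machine Learning SDK itself, purely to show what “how much each feature affects the prediction” means in practice.

    # A minimal sketch of measuring feature influence with permutation importance.
    # Uses scikit-learn on synthetic data to illustrate the idea; Azure Machine
    # Learning provides comparable interpretability tooling.
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic "loan application" data: 5 features, binary approve/reject label.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much accuracy drops:
    # a large drop means the model relies heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: {importance:.3f}")
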

Another example is Microsoft’s implementation of responsible AI in the Face service. The service previously included facial recognition capabilities that could be used to infer a person’s emotional state and to identify individuals. If such features are misused, people may be stereotyped, treated unfairly, or denied access to services.
