An important part of being rational is knowing one's goal, knowing why one has it, taking actions that maximize the probability of achieving it, and being able to tell why one takes certain actions over others. Our success in staying in control of AI clearly depends on how well we as a whole, as well as each of us individually, are able to understand why we do what we do, and to decide what we want to do.
If you don't know why search results are ranked the way they are, you should worry, because that ranking shapes your daily decisions.
If you can't know why a specific trading bot decided to sell stocks, resulting in thousands of people changing their jobs, you should be concerned, too.
If you can't know why the task management system at your work is recommending task X, yet you do it anyway, you are not in control.
AIs are already largely in control of the corporate world, at both micro and macro levels, replacing middle managers, so arguably:
- AIs used for non-trivial human decisions should all be transparent and explainable: rankings, recommendations, trading, etc.
Tomorrow, most people might make a living by working on virtual tasks in augmented reality, paid by businesses for work both virtual and real. People might live on basic income and become free to play most of the time. The majority may start putting on smart glasses the moment they wake up in the morning, and spend their days sculpting new objects and writing programs for the augmented reality world. The data overlay on the real world may become the new canvas. Traditional programming with keyboard and mouse may be replaced by precise typing with a silent mouthpiece and hand gestures.
While free to explore most of the time, skilled people may tend to sign smart contracts that carry high penalties for not performing emergency tasks they are proficient at. There may also be popular incentivizing contracts that create smart bonuses for taking virtual courses to learn skills in demand.
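To make the shape of such an incentivizing contract concrete, here is a minimal sketch of its payout rule: penalties for missed emergency tasks, bonuses for completed in-demand courses. Every name and number is an illustrative assumption, not a reference to any existing contract platform.

```python
# Hypothetical payout rule for the kind of incentivizing "smart contract"
# described above. All names and amounts are illustrative assumptions.

def settle_period(base_pay, emergency_tasks_missed, courses_completed,
                  penalty_per_miss=50.0, bonus_per_course=20.0):
    """Compute one period's payout: base pay, minus penalties for
    missed emergency tasks, plus bonuses for completed courses."""
    penalty = emergency_tasks_missed * penalty_per_miss
    bonus = courses_completed * bonus_per_course
    return base_pay - penalty + bonus

# A skilled worker who missed one emergency task but finished two courses:
print(settle_period(base_pay=500.0, emergency_tasks_missed=1,
                    courses_completed=2))
# 490.0
```

The point of encoding this as a transparent rule, rather than an opaque model, is that the person signing can verify exactly what behavior is penalized and rewarded.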
Most people may come to love it, as they can freely meet people, live, and travel almost for free, sacrificing their privacy in exchange: not having full access to the data about their histories, their recorded experiences, their full search-query records, etc. They might not know why and how the answers of their intelligent assistant are generated, or how the virtual tasks are matched with their interests and location. In fact, most of them may not really care, since most of the time they would do tasks they like, searching for tasks much as they used to search for videos to watch. They might enjoy the company of constantly found new friends and game partners, and not lack a sense of meaning, because such systems could give them a story of why they live, a narrative, and an imaginary destination, with hopes to live longer, or perhaps even forever, which may motivate most of them to solve problems in biotechnology, nanotechnology, and other fields. While this is not necessarily bad, it is a concern that:
Most of this technology tomorrow may be closed, and competing companies like Google, Microsoft, and Tencent may keep their recommendation and matching algorithms, built on deeply private information about people's lives, secret as a strategic competitive advantage.
Most people may not really know what these specific tasks belong to, for example, who the patients are whose telesurgeries they perform, or what ultimate goals some micro-tasks actually serve.
It is not unfathomable that all this could be moderated by a small number of people who program the algorithms distributing business problems into the augmented realities of the vast majority, serving interests defined by the large companies and the businesses paying to place those problems.
What Could Be Done
There are many things that could be done to ensure that people stay in control of AI. One of them is passing transparency laws. If a search engine is part of the daily decisions of most people, people should be able to investigate its theoretical rationale, as well as the source code explaining how it works and where and how the data is stored. Likewise, if a proprietary social network can decide the outcome of an election, its recommendation, ranking, and content injection algorithms should also be open.
- A law could be passed at the international level, requiring that the theoretical rationale, designs, and software of AI systems be opened if they are used as the basis for non-trivial human decisions, opening access to the processes of defining their objective functions and optimization algorithms, as well as to the sources of data (with the possibility to request access from those sources) that are and were used to train the weights and set the rules in decision models.
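As a thought experiment, such a required disclosure could take the form of a machine-readable record that anyone affected by the system can inspect. The field names and URLs below are assumptions for illustration, not an existing standard:

```python
# Hypothetical machine-readable disclosure for an AI system used in
# non-trivial human decisions, mirroring the items the proposed law lists.
# Field names and URLs are illustrative assumptions, not a real standard.

disclosure = {
    "system": "example-task-recommender",                      # hypothetical name
    "theoretical_rationale": "https://example.org/rationale",  # design rationale
    "source_code": "https://example.org/source",               # opened software
    "objective_function": "maximize task-completion rate",
    "optimization_algorithm": "gradient descent on a ranking loss",
    "data_sources": [
        # each source includes a way to request access to the underlying data
        {"name": "task-history-logs",
         "access_request": "https://example.org/data-access"},
    ],
}

# The items the proposed law requires are then individually inspectable:
required = ("theoretical_rationale", "source_code",
            "objective_function", "optimization_algorithm", "data_sources")
print(all(field in disclosure for field in required))
# True
```

The design choice worth noting is that transparency here is structural: the law would mandate *which* fields must exist, while auditors and individuals check their contents.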
While such a law might run against corporations' incentives to profit from innovative algorithms, it may prove critical to making AI decisions transparent, putting humans at large in control of AIs, not the other way around.
This may be relevant today, not tomorrow, because, as noted above, we already have AIs making decisions in markets and workplaces.
Note: although the decisions of large corporations are largely transparent to intelligence agencies, staying in control of AIs arguably requires transparency at the individual level, so that each of us can know why we do what we do, decide what we want to do, and judge whether it makes sense in the grand scheme of things.
Originally posted on https://wiki.mindey.com/topic/ai/DecisionTransparency.html