Here’s What Empowered Leaders Need To Know About Roadblocks Stymying Ethical AI From Succeeding, Which Even Happens In The Noble Push For AI-Based Self-Driving Cars


Empowering leadership about AI ethics is a hot trend and crucially needed.

A rising catchphrase implores us all to empower leaders and ensure proper leadership for the successful adoption of AI.

The overarching notion is that we want to ensure that leaders of every scope and milieu are aware of and doing the right things when it comes to designing, building, testing, fielding, maintaining, and even simply using the latest in AI. Leaders in all walks of life are either demonstrably impacted by AI systems now or soon will be, including for-profit business leaders, non-profit leaders, political leaders, high-tech leaders, regulatory leaders, and so on.

A handy report by the World Economic Forum (WEF) was recently released and provides a highly readable and abundantly useful C-suite toolkit for cogently dealing with AI (per the paper “Empowering AI Leadership: AI C-Suite Toolkit,” published earlier this year and put together by a litany of luminaries including Kay Firth-Butterfield, Mansour AlAnsari, Theodoros Evgeniou, Arunima Sarkar, and others). In a moment, I’ll dive into the findings and share with you some key takeaways. I should perhaps also note that I serve on a WEF committee exploring the reimagining of legal regulations in an AI era, another vital endeavor that dovetails neatly with these expansive AI topics.

To get this discussion underway, let’s start with some contemporary foundations about AI in today’s society.

You undoubtedly are aware that AI is gradually becoming pervasive throughout our daily existence. Initially, AI was assumed to be justly oriented toward AI For Good. We’ve subsequently witnessed the ugly underbelly of AI, replete with untoward biases and dour inequities, known variously as AI For Bad. In the pell-mell rush to get AI out the door and into the hands of often unsuspecting end-users, AI For Bad keeps appearing and seems to be getting worse with each passing day.

How then will the empowerment of AI-related leadership be suitably guided to aim squarely toward AI For Good and steer ardently away from AI For Bad?

One keystone is via the embracing of ethical precepts and practices underlying the implementation of AI. This highlights the importance of discussing Ethical AI, often referred to as AI ethics, which is a topic I’ve covered extensively such as at the link here and the link here, just to name a few. The focus herein will be on various factors that tend to stymy efforts to incorporate and abide by Ethical AI precepts, especially so in large-scale AI endeavors.

First, an enchanting and possibly true tale about Albert Einstein might aid in setting the stage.

The often-told story is that when Einstein was a toddler he was seemingly silent, having remained disconcertingly quiet well past the expected age at which he was supposed to be able to utter words. You can imagine the daunting concern that consumed his loving parents.

One day, seemingly out of the blue, Einstein spoke up during an ordinary dinner that customarily took place absent anything other than the sounds of his chewing and swallowing. Clearing his throat, he supposedly proclaimed in no uncertain terms that the soup was too hot.

Imagine the unbelievable shock and surprise of this unexpected utterance.

His parents were both startled and elated to have finally heard him speak. They immediately responded by asking him why he had not previously said a word and had waited this agonizingly long to finally showcase that he could in fact talk.

To which, allegedly, he replied that it was because up until now everything was in working order.

Whether the story is true or not is open to debate. It has been retold so many times that its initial genuineness is mired in murkiness. There is no doubt that the tale resonates with our overall impression of Albert Einstein and reflects a harbored assumption that he, like many by-the-facts physicists and engineers, preferred a norm of precision and exactitude in life (you’ve surely personally encountered this).

We can put the saga to good use.

Pundits have suggested that Ethical AI was ostensibly understated or timidly quiet in the early days of AI and now (finally) seems to have taken on a persistent and prominent place in the AI spotlight. You see, there are at times sniping comments by some that the horse was already let out of the barn and that AI ethics is now belatedly trying to catch up, but the counter view is that society wasn’t ready to listen and take heed of AI ethics until unethical AI showed its disdainful hand.

Classic chicken-or-the-egg conundrum.

In any case, let’s treat that as water under the bridge and focus on the here and now. Plus, we need to be acutely focused on and worried about the future. AI is here to stay and aims to be ultimately ubiquitous. Few would argue about that.

The bottom-line lesson from the Einstein story is that AI is not in working order and Ethical AI is speaking up accordingly (i.e., the soup has gotten too hot).

Before we get into the meat and potatoes of empowering leadership regarding AI, I’d like to share with you my own devised scale reflecting the awareness of Ethical AI that I have encountered among leaders across the entire spectrum of organizations and sizes. In my advising on AI ethics and my research, this scale has proved to be helpful.

We will start by identifying a ranking or rating score that runs from one to five. A score of five is the best or topmost rating, while a score of one is the lowest or least of the scores. Consider this:

1) Entirely unaware of AI ethics (no awareness at all)

2) Marginally aware of Ethical AI and alarmingly so

3) Has awareness but not adequately doing the walk-the-talk of AI ethics

4) Strongly aware and sincerely attempts to apply Ethical AI

5) Fully aware of AI Ethics and nearly airtight implementation

Each of the scores is somewhat self-explanatory. That being said, we can fruitfully take a brief moment to examine each score for some added insight.

A score of 1 is when a leader has no awareness at all about AI ethics.

The topic is not on their radar. They haven’t heard of it. This would almost seem impossible if the leader is otherwise immersed in any AI-related efforts, but do not be astonished to learn that sometimes they just never got the memo, as it were. The bad news is that they are likely to blindly mosey along without ever having any lucid thoughts about the Ethical AI ramifications of their AI-related projects or uses. The good news is that there is usually a solid chance that, once made aware of the AI ethics considerations, they can be freshly brought up to speed and do not have any bad habits associated with Ethical AI that must be persuasively undone.

A score of 2 involves a leader who believes they are aware of Ethical AI, though in reality they have only a marginal semblance of the matter. This can be a double-edged sword. They often know just enough to be dangerous, namely trying to guide AI efforts in what they believe is an AI ethics direction and yet being sadly off-the-mark. It can be challenging to reposition their indoctrinated Ethical AI ideas. A glimmer of hope is that they might be open to learning more and adjusting beyond what they already thought they knew. That’s the happy-face version of a score of 2.

A score of 3 entails a leader who has awareness of Ethical AI and seems well-rounded in the realm. The problem is that they have a gap in turning the theories and concepts of AI ethics into the day-to-day real-world aspects of AI systems development and deployment. They aren’t able to enact the proverbial walk-the-talk and pretty much only talk-the-talk. I’d count this as mainly good news since, if shown how to walk, they already know the talk, and usually things work out reasonably well.

A score of 4 gets us closer to rarified air, consisting of a leader who strongly knows Ethical AI and has gotten the foundations of walk-the-talk down too. As a statistical matter, there aren’t many of these leaders. If you have an AI effort and have a leader with a score of 4, hang onto them. The odds are that they are in tremendous demand for their talents.

A score of 5 is a stretch goal for anyone, even those fully versed in and adept at the practical throes of undertaking and implementing Ethical AI. There is nearly always something new to be learned or a new twist that arises. Seeking an ironclad or airtight AI ethics adoption is aspirational more than functionally doable. That being said, it is good to always have a topmost mark on any scale so that you have a North Star and can feel darned accomplished if you attain it.

I trust that the preceding proffers insights into the scoring.

One almost instinctively blurted-out question that I usually get when presenting the scale is why there isn’t a zero. People like scales that begin with a zero. When a zero is absent, they get curious and at times anxious. I could have included a zero and denoted it as the score for a leader entirely unaware of Ethical AI, but I didn’t want to cause confusion on the matter and figured that starting at 1 would suffice.

So, to placate the zero demanders, I came up with a score of zero that has a different kind of meaning. A zero score is reserved for a leader that has outright disdain for AI ethics. They might or might not know what Ethical AI is all about. It doesn’t matter to them whether they know or not. They insist that the AI ethics “stuff” is a bunch of hogwash and can summarily be disregarded.

For this, I grant them a zero.

After including the newly hatched zero in the scale, I subsequently had to add a minus 1 too. Here’s why. A leader earns a minus one by actively fighting against Ethical AI. To clarify, the score of zero was for a leader with mere disdain, usually sidestepping any AI ethics considerations. The minus one is a leader who goes overboard and decides that they personally have a vendetta against Ethical AI in all shapes and forms. Yes, a minus 1 type of leader goes well out of their way to refute, defeat, undermine, and altogether battle against any kind of AI ethics precepts or practical application.

I’ve seen this with my own eyes and thought perchance it was a twisted dream, well, I should say a nightmare for all concerned (nightmarish for their team, their organization, their users, and the like).

All told, here then is my “final” version of the scale:

-1. Bitterly fights against any semblance of AI ethics pursuits

0. Has disdain for Ethical AI

1. Entirely unaware of AI ethics (no awareness at all)

2. Marginally aware of Ethical AI and alarmingly so

3. Has awareness but not adequately doing the walk-the-talk of AI ethics

4. Strongly aware and sincerely attempts to apply Ethical AI

5. Fully aware of AI Ethics and nearly airtight implementation

Admittedly, it can be seemingly peculiar to see a scaling of this kind that starts with a minus one. I usually wait to showcase the minus number and the zero until after first discussing the active positive numbers (as per how I’ve introduced the scale to you herein). Some have tongue-in-cheek suggested that the minus one ought to be a minus 100, illustrating dramatically that those leaders fighting against Ethical AI are considerably worse than those only having disdain for it.
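For leaders who like to operationalize such rubrics, here is a minimal sketch, purely my own illustrative encoding rather than any formal instrument, of how the scale could be captured for tracking assessments:

```python
# Illustrative encoding of my Ethical AI leadership awareness scale.
# The labels mirror the scale above; the helper function is merely a
# hypothetical convenience for tallying assessments across leaders.

ETHICAL_AI_AWARENESS_SCALE = {
    -1: "Bitterly fights against any semblance of AI ethics pursuits",
    0: "Has disdain for Ethical AI",
    1: "Entirely unaware of AI ethics (no awareness at all)",
    2: "Marginally aware of Ethical AI and alarmingly so",
    3: "Has awareness but not adequately doing the walk-the-talk",
    4: "Strongly aware and sincerely attempts to apply Ethical AI",
    5: "Fully aware of AI Ethics and nearly airtight implementation",
}

def describe_score(score: int) -> str:
    """Look up the rubric wording for a leader's assessed score."""
    if score not in ETHICAL_AI_AWARENESS_SCALE:
        raise ValueError(f"Score must be an integer from -1 to 5, got {score}")
    return ETHICAL_AI_AWARENESS_SCALE[score]

print(describe_score(4))
# Strongly aware and sincerely attempts to apply Ethical AI
```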

Shifting gears, now that we’ve discussed some rudimentary elements about AI ethics and leadership, we can turn our attention to the WEF AI C-Suite Toolkit.

The WEF report provides an instructive indication of how leading with Ethical AI is a strategic imperative, stating distinctly this core premise: “Today’s leaders need a moral compass and tools to help them navigate the complexity of the emerging moral dilemmas posed by powerful technologies like AI. That compass is the ethics of AI, the discipline concerned with applying ethical thinking (what is morally permissible, desirable, and required) to all practical concerns raised by the design, development, implementation, and use of AI. AI ethics is a golden path to realizing the benefits of AI (the Good) and mitigating the risks (the Ugly). It is the great maximizer-in-balance of AI.”

Leveraging the field of applied ethics, the WEF toolkit for leadership empowerment of AI lists various cornerstone precepts, among them:

Doing good work that produces good and responsible AI

Maximizing benefits and eliminating or minimizing harms

Fairly distributing benefits and burdens

Understanding the impact and implication of AI systems

Challenging the status quo and checking the exercise of power

Bringing and including diverse perspectives

Navigating dilemmas and trade-offs

Etc.

The details underlying these principles are aligned with, and to some degree shaped by, the European Commission’s Ethics Guidelines for Trustworthy AI and the OECD’s Principles of AI, which I’ve covered at this link here and this link here.

The capabilities of AI have created an alluring sense of high reward for putting in place applications that can do more and perform impressive feats of algorithmic decision making (ADM). As with most factors in life, a high reward comes with high risk. Per the WEF report: “Understanding and managing AI’s potential risks and resolving its related ethical dilemmas while considering the relevant trade-offs are critical not only because they are the right things to do or because regulators will increasingly demand them, but also because business stakeholders – employees, customers, investors – will increasingly expect them. AI risks and ethical missteps can lead not only to regulatory fines but to reputational risks and the loss of revenues and markets. The new generation of customers and citizens is much more socially and environmentally aware and digital- and data-savvy, demanding truly trustworthy organizations throughout the public, private and social sectors.”

The AI C-Suite Toolkit has numerous checklists and useful charts that boil down the essentials in a way that can be readily shared with executives and managers who are accustomed to digesting complex info on a condensed and time-saving basis. Of course, care should be exercised in not taking the summaries and highlights out of context. Nor should the details be set aside. At times, a successful and ethically sound AI effort can be attributed to paying attention to the proper details. Yes, the devil can be in the details.

An especially favored portion of the toolkit outlines ten substantive steps that leaders should be continually minding as they seek to manage their AI risks and ensure Ethical AI results (after the list, I offer a small sketch of how the stage gates in step 2 might be encoded):

1) Align on AI principles relevant to the business

2) Confirm adequate top-down and end-to-end governance by instantiating stage gates throughout the AI application development and deployment process

3) Design for robustness and safety by incorporating assessment and risk-tiering processes

4) Exercise control and value alignment by enabling ongoing monitoring and adhering to the practices established by the three lines of defense structure

5) Respect privacy by considering what “should” vs “can” be done with data and elaborating on existing data governance practices

6) Be transparent by enabling the traceability, explainability and communication of information decisions and actions of the AI system and the data that feeds the AI system, as well as the visibility into how (and which) broader systems leverage AI

7) Extend security practices to protect against AI-specific risks

8) Enable diversity, non-discrimination and fairness

9) Clarify and engender accountability by implementing the use of impact assessments and external auditing; establish specific requirements for the three lines of defense structure; identify and educate stakeholders in the three lines on the roles and responsibilities

10) Foster societal and environmental wellbeing by considering a broader scope of metrics, like ESG
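To make the stage-gate notion in step 2 a bit more concrete, here is a minimal sketch of my own devising; it is not from the WEF toolkit, and all gate names and required checks are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical stage-gate structure for an AI project life cycle.
# Gate names, checks, and the pipeline itself are invented for
# illustration; they are not taken from the WEF toolkit.

@dataclass
class StageGate:
    name: str
    required_checks: list
    completed_checks: set = field(default_factory=set)

    def is_passed(self) -> bool:
        """A gate passes only when every required check is completed."""
        return set(self.required_checks) <= self.completed_checks

PIPELINE = [
    StageGate("design", ["bias-risk assessment", "privacy review"]),
    StageGate("development", ["fairness testing", "explainability review"]),
    StageGate("deployment", ["external audit", "incident-response plan"]),
    StageGate("monitoring", ["drift monitoring", "ongoing ethics review"]),
]

def next_blocked_gate(pipeline):
    """Return the first gate whose checks are incomplete, else None."""
    for gate in pipeline:
        if not gate.is_passed():
            return gate
    return None

PIPELINE[0].completed_checks.update({"bias-risk assessment", "privacy review"})
print(next_blocked_gate(PIPELINE).name)  # development
```

The design choice here is simply that a project cannot advance past a gate until its ethics-related checks are affirmatively completed, mirroring the top-down, end-to-end governance that the toolkit calls for.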

Okay, given those highlights of the WEF AI C-Suite leadership empowerment report, I’d like to cover some ways in which leaders undercut these kinds of AI ethics precepts.

I’ll focus on, shall we say, inadvertent or mistaken missteps by leaders. For those leaders determined to shove the old Ethical AI under the bus, there is a myriad of obnoxiously sneaky and even seemingly innocent tricks that can be used for such insidious scheming. We would be here all day covering the litany of those underhanded ploys.

In my experience, the leaders that tend to get tripped up will land smack-dab into one or more of these potentially dire pitfalls:

Falsely believes AI ethics is strictly precise and mindlessly mechanical in being fulfilled

Becomes overly preoccupied with AI ethics risk-aversion and blind to ethical AI benefits

Takes a checkmark “we are done” attitude and says move on now that Ethical AI is over with

Assumes that the solution for AI ethics entails writing code amid computational specifications

Since I am particularly familiar with having to turn around those school-of-hard-knocks Ethical AI troublemaking infusions, let’s use them to illuminate some tangible examples of how AI efforts can go awry. Again, we are going to assume that the leaders got themselves into AI-adverse predicaments by unfortunate happenstance and not by devious purposeful intent. They were doing what they thought or hoped was right and regrettably hit sandbars and snags that they didn’t realize were lying just beneath the shimmering surface of AI rewards, harboring ghastly AI risks and hazards.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about these AI ethics pitfalls, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
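For clarity, here is a minimal sketch that encodes the levels taxonomy just described; the per-level comments are my informal paraphrases rather than official SAE J3016 wording:

```python
from enum import IntEnum

# Shorthand for the driving automation levels discussed above. The
# per-level comments are informal paraphrases, not official SAE
# J3016 wording.

class AutomationLevel(IntEnum):
    LEVEL_0 = 0  # No automation; the human does all of the driving
    LEVEL_1 = 1  # A single assistance feature (e.g., adaptive cruise control)
    LEVEL_2 = 2  # Semi-autonomous ADAS; the human co-shares the driving task
    LEVEL_3 = 3  # Conditional automation; the human must be ready to take over
    LEVEL_4 = 4  # True self-driving within a limited operational domain
    LEVEL_5 = 5  # True self-driving anywhere a human could drive

def is_true_self_driving(level: AutomationLevel) -> bool:
    """Levels 4 and 5 are the ones in which the AI does all the driving."""
    return level >= AutomationLevel.LEVEL_4

print(is_true_self_driving(AutomationLevel.LEVEL_2))  # False
print(is_true_self_driving(AutomationLevel.LEVEL_4))  # True
```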

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And AI Ethics Pitfalls

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and Ethical AI questions entailing the eyebrow-raising AI ethics pitfalls that I’ve shortlisted (there are more, certainly, but we’ll use just the handful, for now, to aid in illustrating the matter).

Recall that these are the particular Ethical AI pitfalls pertaining to leadership missteps that we are going to showcase via AI-based self-driving car endeavors:

Falsely believes AI ethics is strictly precise and mindlessly mechanical in being fulfilled

Becomes overly preoccupied with AI ethics risk-aversion and blind to ethical AI benefits

Takes a checkmark “we are done” attitude and says move on now that Ethical AI is over with

Assumes that the solution for AI ethics entails writing code amid computational specifications

Let’s unpack each one.

A company will be described that generally represents many such firms in the self-driving space. No person, living or dead, is specifically being depicted. The example is a composite and merely generally representative. Your mileage may vary, in that some, all, or even none of these examples will be encountered in whatever firm you happen to work in or any entity that perchance you become aware of.

With those notable disclaimers, we can start with the first listed pitfall that some AI leaders might get snagged on.

Falsely believes AI ethics is strictly precise and mindlessly mechanical in being fulfilled

A modest-sized tech company that is larger than a startup but smaller than a mega-firm is developing an AI-based self-driving car. They are taking a conventional human-driven car and trying to transform the vehicle into an autonomous one. Among various AI projects, a particularly important effort involves adding a suite of sensors such as video cameras, radar, LIDAR, ultrasonic units, thermal imaging devices, etc. to the vehicle and then developing AI that can handle the sensors.

The manager overseeing the AI-powered sensor fusion portion of the self-driving system was already up to their ears with all kinds of software and hardware issues. Sensor fusion consists of taking the computational analyses of each respective sensor and combining the results to arrive at an overall interpretation of the existing driving scene (for my detailed discussion on sensor fusion, see the link here). The digital patterns found in the video camera data had to be married to the digital patterns from the radar data, and so on. It was all very complicated and fraught with lots of AI-intensive challenges.
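To give a flavor of what marrying those digital patterns can involve, consider this deliberately toy sketch of fusing a camera detection with a radar detection. Real AV stacks use far more sophisticated probabilistic techniques (such as Kalman filtering and cross-sensor object association); the detection format, weights, and numbers below are all invented for illustration:

```python
# Toy sensor fusion: combine the same object's detection from two
# sensors into one fused estimate, weighting each sensor by its
# confidence. Real systems use far more sophisticated probabilistic
# methods (e.g., Kalman filters and cross-sensor object association).

def fuse_detections(camera_det: dict, radar_det: dict) -> dict:
    """Fuse a camera and a radar detection of the same object.

    Each detection holds an estimated distance in meters and a
    confidence in [0, 1]. Returns a confidence-weighted estimate.
    """
    total_conf = camera_det["confidence"] + radar_det["confidence"]
    fused_distance = (
        camera_det["distance"] * camera_det["confidence"]
        + radar_det["distance"] * radar_det["confidence"]
    ) / total_conf
    return {
        "distance": fused_distance,
        "confidence": max(camera_det["confidence"], radar_det["confidence"]),
    }

camera = {"distance": 42.0, "confidence": 0.7}  # camera: pedestrian at ~42 m
radar = {"distance": 40.5, "confidence": 0.9}   # radar: same object at ~40.5 m
print(fuse_detections(camera, radar))  # fused distance is roughly 41.2 m
```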

Without a properly working sensor fusion capability, the self-driving car would be useless. Worse still, if the sensor fusion were to contain faulty hidden programming, the consequences could ultimately be devastating once the autonomous vehicle is utilized on public roadways. The AI driving system might get confounded by the sensor fusion and run into a nearby car, collide with a lamppost, or strike a bicyclist or pedestrian. This is serious stuff.

The overworked manager led a team of AI software engineers, hardware engineers, quality assurance specialists, and a supporting staff. Having attended some quick courses on Ethical AI, the manager knew that, as the leader for the sensor fusion, there was a chance that ethical issues of the AI could readily materialize. For example, some research studies indicated that AI systems detecting pedestrians had been problematically skewing the detection based on racial biases and gender biases (see my coverage of this type of AI ethics issue at the link here).

How could the Ethical AI considerations be best handled?

As a trained engineer with a Ph.D. in mechanical engineering, this leader saw the world as one that could be inexorably reduced to formulas and exactness. The directive that the AI should be non-discriminatory and operate in a fair manner could certainly be figured out with some number crunching and the judicious selection of appropriate metrics. This task was assigned to a support staff member. The leader decided that, having summarily handed over the delicate matter to a trusted team member, the whole problem of adhering to Ethical AI could be dealt with in an entirely analytical way and nothing else needed to be done. Case closed.

You can perhaps guess what happened next. The staff member was bewildered. After an extensive behind-the-scenes research effort, there didn’t seem to be any analytically sound metrics that specially applied to this AI setting, and definitely few if any that closely matched the AI ethics aspects. Trying to turn qualitatively oriented AI ethics precepts into a narrow base of precise quantitative measures was overwhelming and did not seem reasonably feasible. Worried that the leader would be greatly disappointed, the staff member became distraught and frantic at work.
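To appreciate why the staff member struggled, it helps to see what one of the few tractable quantitative measures even looks like, namely comparing pedestrian-detection recall across demographic subgroups. The sketch below uses invented data and an invented disparity threshold; notice how it captures only a thin quantitative sliver of what the qualitative fairness precept actually demands:

```python
# Toy fairness check: compare pedestrian-detection recall across
# subgroups. The data, subgroup labels, and disparity threshold are
# all invented; a real evaluation would need far more care in
# defining subgroups, gathering representative data, and metrics.

def recall_by_group(results: dict) -> dict:
    """results maps group -> (true_positives, total_pedestrians)."""
    return {group: tp / total for group, (tp, total) in results.items()}

def max_disparity(recalls: dict) -> float:
    """Largest gap in recall between any two subgroups."""
    values = list(recalls.values())
    return max(values) - min(values)

detection_results = {
    "group_a": (930, 1000),  # 93% of pedestrians detected
    "group_b": (850, 1000),  # 85% detected: a red flag worth probing
}

recalls = recall_by_group(detection_results)
disparity = max_disparity(recalls)
print(f"Recalls: {recalls}, disparity: {disparity:.2f}")
if disparity > 0.05:  # illustrative threshold only
    print("Disparity exceeds threshold; investigate dataset and model.")
```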

As you will see in a moment, things went from bad to worse. We move to the next pitfall to see further how this played out.

Becomes overly preoccupied with AI ethics risk-aversion and blind to ethical AI benefits

The manager overseeing the AI sensor fusion programming had a fixed budget to get the work done. To justify the cost of dealing with the AI ethics-related elements, the manager had identified potential risks associated with Ethical AI lapses in the AI system and pointed out the dollar costs if those risks arose once the autonomous vehicle was later placed into active use.

In the mind of this leader, the chore of handling the AI ethics considerations was solely about reducing or eliminating risks. This gave the team the impression that the only point of dealing with Ethical AI matters was to avoid bad problems and reduce long-term costs. The aura of such leadership “guidance” led the team to perceive these matters as an undesirable and dreary inconvenience.

It had not occurred to this leader that the attention to Ethical AI could be a benefit too.

When a top-level corporate officer found out that the team was incorporating AI ethics considerations into this subsystem, the executive was quite elated. This could be touted as an important feature of their AI driving system that other such AI systems did not have. Furthermore, communities that were eyeing the use of self-driving cars had heard that AI sometimes contains ethically questionable practices. The fact that this firm was intentionally seeking out and aiming to overcome such dour AI ethical faultiness would be a prized means of likely gaining community support for their brand of self-driving cars. It was a big upside of added benefits.

With that revelation, we’ll see what happened next.

Takes a checkmark “we are done” attitude and says move on now that Ethical AI is over with

With mounting pressure to get the AI ethics aspects figured out, the manager of the sensor fusion portion decided that an eminently efficient approach could be undertaken. Using some spare time during lunch, this leader put together an Ethical AI checklist and posted the list onto the team’s Slack channel. Each member of the team was assigned one of the items on the list.

They were all given two weeks to get their respective items done. A checkmark would suffice as evidence that the AI ethics matter had been dealt with. Sure enough, by good fortune, the manager was pleased to see that by the end of the two-week deadline, all the items had been checked off as completed.

Job well done.

It was a huge relief. There was no need to devote any further thinking to Ethical AI matters. Yes, the effort had chewed up the team for those two weeks, but they could now fully concentrate on getting the real work done.

Can you guess what happened next?

A month later, while doing tryouts of the sensor fusion as part of a closed-track run-through, it turned out the AI appeared to be falling into an Ethical AI concern. The manager couldn’t blame anyone on the team since it was a topic that had not been listed on the checklist. It was an oversight by the leader, who was in great dismay. The AI ethics considerations were already supposed to be done, presumed fully completed and long gone.

Having to unexpectedly divert attention to this issue was going to mess up the schedule and put this vital AI effort behind. It would cause an internal ruckus and the leader would be lambasted for poor planning and tainted as having exercised inadequate leadership.

It dawned on the leader that, in the future, it might make more sense to anticipate Ethical AI aspects arising throughout the life cycle of the AI build and fielding. Rather than trying to squeeze it all into the upfront portion, the odds were that such matters would happen and require devoted attention from beginning to end. In fact, beyond the end too, once the AI was being actively used and ongoing maintenance and upkeep were taking place.

We next move to the final part of this AI ethics lessons-learned tale.

Assumes that the solution for AI ethics entails writing code amid computational specifications

One of the software engineers on the team could see that the manager was troubled. This software engineer was a hotshot. There wasn’t anything that couldn’t be hacked out and coded. It seemed obvious that this wrangling with the Ethical AI considerations could be entirely resolved by writing the right kind of code.

Asking for permission to work on this as a side project, the software engineer eagerly jumped into trying to write code to cope with any and all AI ethics possibilities. At first, code writing was the dominant focus. The programming proceeded to get more harried and enormously complicated. The software engineer took a deep breath. Perhaps the first step ought to have been putting together computational specifications. That way, the code would be tied to appropriate specs.

After several weeks of working late nights and weekends, the software engineer threw in the towel. The programming was becoming onerous. It wasn’t even covering what needed to be done. Even more exasperating, a lot of what was required did not lend itself to straight-ahead coding alone.

The leader reflected on this when the software engineer came and essentially admitted defeat. While pondering the situation, the leader recalled a remark that had been helpful in the past: sometimes, if all you know how to use is a hammer, everything looks like a nail.

A coding-obsessed mind can be like that.

Conclusion

You can easily extract the AI driving system aspects from the aforementioned leadership pitfalls scenario and, in doing so, highlight that nearly any large-scale AI system can harbor the same gotchas. There isn’t anything that makes AI-based self-driving cars uniquely susceptible to these Ethical AI leadership-related qualms.

For those of you designing, building, testing, fielding, maintaining, or just plain old using AI, make sure that you and your fellow leaders are cognizant of the AI ethics underpinnings. I hope you’ll strive for at least a cherished 4 on the Ethical AI empowered-leadership rating scale. Going for the esteemed 5 is welcomed too.

On the lower end of the scale, please do not let yourself or other leaders become a zero, nor find yourself in a situation of a minus one, a minus one hundred, or maybe even a minus one thousand. If that does happen, you’ll indubitably need a massive heaping of AI ethics leadership empowerment to right that likely sinking ship.

It would be the ethical thing to do.
