
The AI Revolution: Ally or Adversary

  • hawktribe3
  • Aug 4
  • 9 min read



Considering the constant bombardment of AI information in the media today regarding its infinite possibilities, its impending implementation, and its aggressive adoption at the behest of salivating tech oligarchs and corner-cutting corporatists, few “real” discussions are being had – at least on the mainstream stage – regarding the thin line that Artificial Intelligence (AI) uniquely straddles in the dichotomy of “Ally or Adversary.”


After all, what makes Artificial Intelligence any different than other tools, services, or products that we currently leverage in our daily lives for the sake of minimized effort, heightened accuracy, expediency, specialized ability, or convenience?


Fundamentally speaking, as well as from the sheer nature of being a “tool,” the items that we place so much dependency on oftentimes possess a dual capability that is inert in and of itself, but in action, is acutely defined by the nature of its user.


For example, it can be argued that a hammer, nail gun, saw, and drill (and perhaps beer), are essential tools for building virtually anything made of wood. However – and conversely – these very tools can also be used to inflict great harm in the wrong hands, and in some cases have been weaponized to do so (yes, even beer).


However, it has never become necessary to place laws and/or regulations around these various tools considering that they are very limited and narrow in scope and functionality, and more importantly, are clearly controlled and delineated by one key activating force and element: The Human.


Even in the Cybersecurity world, with its plethora of seemingly unending tools, algorithms, and applications, the difference between a Hacker and a Cracker is subjectively delineated by a subtle human characteristic called “intent,” which gives way to a particular set of actions to achieve a certain objective, which in turn defines the actor as an ally or an adversary, and the software as a tool or a weapon.


With either tool, whether hardware or software, the ability to move and enact is almost 100% at the behest of its human enabler, even in the rare instance that such tools are incorporated into the machinations of a Rube Goldberg machine, which itself is limited to other rudimentary tools in the way of levers and pulleys driven by the force of inertia, which ultimately bows to the power of human influence. In fact, a Rube Goldberg structure gone awry can, in most cases, be easily deactivated by an unsophisticated kill-switch of a swift and accurately placed karate chop. Eii-yaa!!!


As a result of this degree of control and intention, laws, regulations, and accountability are placed around the autonomous factor – the human – as opposed to the tool itself, given the presumption that it is the human that is the beholder of greater intelligence, ethics, and thus control, with the tool serving as its assistant or support provider of sorts, whether we’re talking about a jigsaw, an automobile, or a K-9 service dog.


But what of the intelligent tools to which we grant “autonomy,” as in the way of Agentic AI – a design that bestows a system with carte blanche decision-making powers, oftentimes over human livelihood, in lieu of human control or intervention – and who bears responsibility for its errors and missteps?


What of the tools that operate in a “black box,” where the internal workings of the systems or processes are not fully understood or transparent to the user, yet are granted the power to determine health, credit, insurance, education, access, or employment worthiness?


Or perhaps more invasively speaking, what of the biometric systems that have surreptitiously become ubiquitous in our daily lives by way of our cellular and IoT devices and even more so by way of Law Enforcement implementations?


Beyond the known high Crossover Error Rate (CER) – also called the Equal Error Rate (EER) – of biometric technology, which consistently results in high false-acceptance and/or false-rejection rates, facial recognition technology in particular has proven to be severely erroneous when it comes to women and darker-complected individuals as a result of biased training data and inadequate algorithm tuning.
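For the unfamiliar, the CER/EER is simply the operating point where the False Acceptance Rate (FAR) and the False Rejection Rate (FRR) cross as the match threshold is varied. A minimal sketch in Python makes the idea concrete; the match scores below are entirely synthetic and illustrative, not data from any real biometric system:

```python
# Illustrative sketch: the Equal Error Rate (EER) is the point where the
# False Acceptance Rate (FAR) and False Rejection Rate (FRR) curves cross.

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted; FRR: fraction of genuine scores rejected."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor, steps=1000):
    """Scan thresholds in [0, 1] and return the EER and the threshold where |FAR - FRR| is smallest."""
    best_gap, best_t, eer = 1.0, 0.0, 1.0
    for i in range(steps + 1):
        t = i / steps
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
            eer = (far + frr) / 2
    return eer, best_t

# Hypothetical match scores in [0, 1]: higher means a stronger claimed match.
genuine  = [0.9, 0.85, 0.8, 0.75, 0.7, 0.6, 0.55, 0.5]
impostor = [0.1, 0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.6]

eer, threshold = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.2%} at threshold {threshold:.2f}")  # prints: EER = 12.50% at threshold 0.50
```

Tuning the threshold only trades one error for the other: raise it and the FRR climbs (legitimate users rejected); lower it and the FAR climbs (impostors accepted). A system with a high EER, like the one sketched here, is unreliable in both directions at once.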


Furthermore, beyond the common vulnerabilities, risks, and threats that currently permeate the conventional applications, platforms, and software that we’ve become accustomed to, the unique quality and ability of AI invites and accelerates harms in a class of its own.


For example, usage of Generative AI applications that leverage LLMs (Large Language Models) – like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI, Grok, and DeepSeek, which currently rank amongst the most popular chatbots utilized – presents increased security and operational risks that should be of primary concern as they relate to deepfakes, hallucinations, data leakage, and adversarial machine learning, to name just a few. There are various privacy, business, and ethical risks that should be considered as well.


Bearing in mind these challenges, the speed of AI implementations, and potential impacts to individuals, groups, cultures, institutions, society, and the environment, one would think that there would be a concerted and applauded effort to doggedly assure and enforce the protection and best interest of our most precious socio-economic commodity: The Human.

Such a concept of assuring the safety of this commodity is commonly known as “Human-centricity,” but what is that exactly?

 

Human-Centricity

 

A simple Google search will return an AI Overview defining “human-centricity” as prioritizing human needs, experiences, and values in the design and development of products, systems, or processes.


The OECD (Organisation for Economic Co-operation and Development) is an international organization leading the effort to promote policies that improve the economic and social well-being of people around the world. As it relates specifically to AI, the OECD defines “human-centricity” as emphasizing the importance of designing, developing, and deploying AI systems in a way that respects human rights and democratic values and prioritizes human well-being.


The EU AI Act, a European regulation on Artificial Intelligence and the first comprehensive – and perhaps de facto – offering by a major regulator anywhere, defines the purpose of its regulation in Article 1 as promoting the “uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.”


EO14091, an executive order signed by President Biden in February 2023, only to be rescinded by President Trump two years later, emphasized a human-centric approach to AI development and deployment by requiring federal agencies to ensure that their AI systems do not propagate biases or unfair practices.


As it pertains specifically to AI Governance, the order was set to achieve this directive by promoting the following:

  • Equity and Fairness
  • Review and Reform
  • Transparency and Accountability
  • Public Participation


All things considered, it is easy to see that the common theme in the aforementioned standards and requirements - whether explicitly or implicitly stated - is the need for the implementation and assurance of “trustworthiness” as well as the protection of human “values” and safety in the development and deployment of AI systems.


However, considering that human society is not monolithic by any stretch of the imagination, what is it that we humans fundamentally value, and what is our penchant or appetite for trustworthiness? Are we mentally, emotionally, and perhaps spiritually mature enough to allow our needs and activity to be automated, accelerated, and governed by a tool that not only lacks the very thing that makes us human but, quite frankly, lacks the ability to even give a damn?


The answer to those questions alone will determine the kind of tool that AI will largely become in the current society and human ecosystem - whether a weapon or a workhorse; or perhaps a deadly combination of both - and thus, the paradigm of “Ally or Adversary” will largely be determined by the intent and actions of those with the greatest access to the tool and resources to sustain it.


Few reasonable minds would argue that the current state of affairs in the American economy alone necessitates a need for ethical, responsible, and trustworthy governance and enforcement around AI as it relates to protecting intrinsic human values while insulating individuals from foreseeable risks like discrimination, Intellectual Property infringement, disinformation, and mass job displacement just to name a few; some of which are being felt as we speak.


Case in point: In July 2025, Hisayuki Idekoba, the CEO of Recruit Holdings, the parent company of the job search firms Indeed and Glassdoor, announced a layoff of 1,300 employees as the company embraces artificial intelligence. Idekoba stated that “AI is changing the world” and that the company must adapt accordingly.


So let me get this right. A company whose business model is premised on helping people find employment is now laying people off to replace them with an AI solution that will make the company more efficient at helping people find employment? In other words, let’s fire people so that we can optimize our capabilities in getting them hired. Would you like some nuts with your cognitive dissonance, Sir?


Perhaps this is merely this company’s novel, short-term way of assuring its pipeline by turning former employees into clients, presuming that the former employees will rely on the services of the company that fired them in the first place.


Either way, from a human-centric standpoint, it can be argued that any company that adopts such a stance as a means of adapting to such a state has clearly expressed what it truly values most, with the human employee clearly ranking at a distant #2 at best.


Even darker and more chilling: during a recent speech in Washington, Sam Altman, OpenAI’s CEO, warned about the plight of an AI-dominated future. He mused of entire categories of jobs being wiped out in various industries, as well as the likelihood of AI being used for war, injustice, and various malicious purposes.


One would suppose that the implementation of human-centric regulations would be a no-brainer, championing clear, common-sense initiatives designed to curb and minimize the likelihood of such abuses or misuses. However, some stakeholders, including many who supported the rollback of EO14091, argue that AI regulations limit innovation by creating barriers and bureaucratic obstacles for developers and organizations, despite the fact that current AI regulations and recommendations leave considerable wiggle room for innovation in R&D efforts and environments.


Quite frankly, any developer or organization that suggests that the protection of human values, equity, and well-being poses an inconvenience or impedance to innovation is either not as innovative as they claim, or simply wants to operate with impunity from any harms that their “innovations” may cause; or perhaps – and more likely - an incompetent or duplicitous combination of both.


After all, proclaiming that the regulatory protection of human well-being is a hindrance to innovation is like saying that the airbags, brakes, and seatbelts in a racecar are hindrances to improving its speed and performance; an idiotic notion that is quite easy to dispel with little effort.


For example, the Koenigsegg Jesko Absolut is perhaps the fastest and best-performing automobile currently known to man. Not only is it designed to clock in at a stunning 330+ mph, but for obvious reasons it possesses the aforementioned human-protecting safety devices in addition to many more, not to mention that it is also street-legal (“legal” indicating that it adheres to additional laws and regulations that protect human values). These safety mechanisms are put in place to prioritize and protect human life and interests in lockstep with – not in lieu of – innovation and performance. Sensible, right?


Therefore, it can be argued that true innovations are advances – technological, methodological, or otherwise – that not only consider, protect, and expand human values and wellness, but prioritize them over all else.


In fact, well-designed regulations and standards would actually encourage and foster innovation by providing clear guidelines, accountability, and transparency, which then would result in confidence and trust in the technology as well as in the organization that is deploying it.


And if there are two characteristics that we know to be of the most value and importance to any organization or union, those characteristics most certainly would have to be integrity and trust, as one begets the other; intrinsically linked as the bone and sinew of any successful relationship, business or otherwise.


But what does that look like in a time when integrity has eroded into mendacity, trust into sycophancy, and facts into falsehood? Most likely, anything that purports to be an ally will in fact be an adversary, and any tool that is indiscriminately marketed and deployed as a workhorse - with minimal regulatory oversight - will in truth be a weapon.


But here’s the good news: Forewarned is forearmed (you can thank me later; perhaps with that beer that I mentioned if you’re an ally). Tactical knowledge of the pros and cons of any situation and/or environment affords one with the ability to make an informed, consensual decision on what to accept or what not to accept; on how to react or how not to react.


This information allows one to distinguish an ally from an adversary, and by extension, a tool from a weapon. Therefore, one must make a concerted effort to hear what’s not being spoken; to see what’s not being shown, and to protect what’s not being protected.


After all, Artificial Intelligence could be a most wondrous tool for accelerating the best of the human race and environment if placed in the right hands that are led by an intelligent, human-centric mind.


Otherwise, Artificial Intelligence will only beget real ignorance.

 

Lamar Hawkins, CISSP, AIGP, CDPSE, CIAM, CDP, ECES, PCIP

 
 
 
