1. Introduction: The Two Economies of Artificial Intelligence
The global artificial intelligence ecosystem stands at a definitive historical precipice as the calendar turns to 2026. For the past three years, the market has been dominated by a singular, overwhelming narrative: the frantic, capital-intensive construction of the physical and digital infrastructure required to birth machine intelligence. This period, characterized by the breathless accumulation of graphics processing units (GPUs), the groundbreaking of gigawatt-scale data centers, and the training of ever-larger foundation models, has generated trillions of dollars in paper wealth and fundamentally reshaped the capital expenditure profiles of the world’s largest corporations. However, a nuanced analysis of market dynamics, historical precedent, and emerging economic data suggests that this initial phase—the “Installation Phase”—is rapidly approaching its saturation point. We are witnessing a decoupling, a bifurcation of the AI economy into two distinct trajectories with inversely correlated fortunes: a saturating infrastructure layer facing deflationary pressures and margin compression, and a nascent application layer poised for a “Golden Age” of value creation.
In 2012, I founded Nyopoly.com to revolutionize retail pricing with a "Customer Engaged Pricing" model, addressing a $2 trillion inefficiency. Drawing on my background in retail and technology, we aimed to maximize profits through tailored customer negotiations. Lessons learned include the importance of user psychology, data challenges, and clarifying product identity amidst complex business strategies.
We are living through a massive experiment in information dynamics, and I’m starting to worry about the results.
Initially, Large Language Models (LLMs) were trained on the “wild” internet—a chaotic, messy, and deeply human repository of text. It was a library written by people. But today, the internet is fundamentally changing. More and more of the content we consume (and that future models will train on) is heavily influenced, if not completely generated, by AI.
In an era once defined by our mastery over tools, the line between operator and operated is beginning to blur. Artificial intelligence, once passive and programmable, is taking initiative—setting agendas, directing workflows, even determining which human actions are valuable. As AI gains agency, the human role risks inversion. We are no longer just the users of systems, but increasingly the used—our behaviors captured, our data extracted, our choices shaped to serve machine-driven objectives. The question is no longer whether AI can think, but whether we can still choose freely in a world increasingly run by those who never sleep, never forget, and never ask why.
Today’s Tuesday Reading is by Shawn Harris, MOR Associates Executive Coach. Shawn may be reached at sharris@morassociates.com or via LinkedIn.
In most MOR programs, in the first workshop, on the first day, we support participants’ self-awareness in how they spend their precious resource of time. We do this through a framework that inventories everything we do into three categories: Leading, Managing, and Doing. As artificial intelligence comes at us all at full speed, we wonder how AI might impact the evolving leader and our Leading, Managing, and Doing.
AI’s Impact on the Repeatable Tasks of Managing and Doing
Warren Bennis described the difference between managing and leading this way: “managers do things right; leaders do the right thing.” AI’s capability for automating routine tasks not only transforms the ‘Doing’ in our framework but also frees ‘Managing’ from more repetitive duties, allowing leaders more space for ‘Leading’ and the strategic realm of envisioning the future and setting direction. The good news is that we now have a wider pool of resources to delegate to. By delegating repetitive and data-intensive tasks to AI, we unlock capacity for higher-level work, creativity, and strategic thinking.
As this evolution unfolds, organizational structures are likely to become flatter. With fewer layers of management, employees will increasingly need to lead from where they are. Leaders can make decisions more quickly in response to market changes. Organizations will favor agile, cross-functional teams with the flexibility to adapt continuously. Nimble collaboration between humans and AI systems will become a competitive advantage. Communication and emotional intelligence will gain importance as coordinating large teams without hierarchy becomes critical. Leaders will need skills to create alignment and inspire people in this environment.
Increased People Priorities
Generative artificial intelligence (GenAI), AI capable of generating new content, will change hiring priorities. Demand will grow for talent skilled at building AI systems and integrating them into business processes. Pew Research found that jobs highly exposed to AI tend to require more analytical skills like critical thinking, mathematics, and complex problem-solving.
In this era, the essence of ‘Managing’ extends beyond traditional boundaries, as leaders prioritize change management, guiding and preparing their teams for a future interwoven with AI, incorporating the ‘Doing’ through continuous learning and the ‘Leading’ through visionary workforce development. They must communicate a compelling vision for human-AI collaboration that alleviates fears of job loss. With technology transforming work, leaders should champion continuous learning and development. Those who prepare their people will build durable talent pipelines.
The future of work is intrinsically linked to our ability to prepare our workforce for the new realities of an AI-driven world. This entails technical training and fostering a culture of adaptability, lifelong learning, and ethical reasoning. Leaders must champion initiatives that equip employees with the skills to thrive alongside AI, ensuring our organizations remain competitive and innovative.
Moreover, as we navigate the ethical terrain of AI integration, we must be vigilant in addressing issues such as data privacy, algorithmic bias, and the societal impact of automation. Ethical leadership in the age of AI demands a commitment to transparency, accountability, and fairness, ensuring that our AI initiatives are aligned with the greater good.
Strategic Thinking in the AI and GenAI Era
GenAI has the potential to automate specific analytical and data-processing tasks typically done by knowledge workers. According to Pew Research, 19% of American workers in 2022 were in jobs with activities highly susceptible to automation by AI. By delegating the ‘Doing’—the analytical legwork—to AI, leaders can invest more in ‘Managing’ through insightful interpretation and ‘Leading’ by crafting visionary, long-term strategies that navigate the AI-infused landscape.
With AI handling rote analytical work, leaders will need stronger abilities in systems thinking, seeing connections, and envisioning future scenarios. Strategic planning will become even more important as technological change accelerates. Leaders must regularly re-evaluate how new AI capabilities can be integrated into operations and strategy.
Concluding our exploration, it’s clear that AI doesn’t just change the way we lead, manage, and do; it amplifies our capacity to excel in these roles. The imperative for leaders now is to embrace AI, blending its capabilities with our human strengths. As leaders, we are called upon not just to adapt to this evolving terrain but to actively shape it. Our challenge, and indeed our opportunity, lies in redefining what it means to lead, manage, and do in an environment where AI not only supports but also enhances our human efforts. The call to action for you is to embrace this shift proactively: assess and realign how you lead with an eye towards innovation, manage with strategic intent, and execute with a blend of human creativity and AI efficiency. In doing so, we not only navigate the present but also lay the groundwork for a future where AI catalyzes growth, innovation, and enhanced human collaboration. As we stand on the brink of this new era, let us commit to leading the charge, harnessing the full potential of AI to elevate our organizations and, ultimately, society at large.

Originally posted on 2/27/2024 to MOR Associates’ Tuesday Readings: https://morassociates.com/insight/wordpressmorassociates-com/leading-managing-doing-and-ai/
We’ve all seen firsthand how technological advancements can drive profound changes in our economy. In this era of digital transformation, one of the most promising developments in artificial intelligence (AI) is the rise of foundational models, like GPT-4 from OpenAI. These large-scale machine learning models, trained on diverse Internet text, show unprecedented versatility and are now being considered foundational building blocks for a range of AI applications. In this blog post, I will present five reasons why foundational models represent the future of AI.
Flexibility and Generalization
One of the key reasons foundational models are the future of AI is their inherent flexibility and generalization ability. Unlike narrow AI models that are trained for a specific task, foundational models can be fine-tuned to perform a variety of tasks, from translation and summarization to coding assistance and content generation. This flexibility means that businesses can use a single foundational model to power a wide array of applications, reducing the need for multiple specialized models.
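As a conceptual illustration of this reuse pattern (not a real fine-tuning API), the idea can be sketched in plain Python: one frozen, shared “base” feature extractor serves several lightweight task-specific heads, each trained cheaply on its own data. The functions `base_features` and `train_head` and both toy tasks below are invented for the example; real fine-tuning uses frameworks like TensorFlow or PyTorch, but the pattern is the same.

```python
def base_features(x):
    """Stand-in for a frozen pre-trained model: maps raw input to shared features."""
    return [1.0, x, x * x]

def train_head(examples, lr=0.01, epochs=2000):
    """Fit a small linear head on top of the frozen features via SGD."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in examples:
            f = base_features(x)
            pred = sum(wi * fi for wi, fi in zip(w, f))
            err = pred - y
            # Only the head's weights are updated; the base stays frozen.
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    return sum(wi * fi for wi, fi in zip(w, base_features(x)))

# Two different "tasks" reuse the same frozen base at a fraction of the
# cost of training a new model per task:
doubler = train_head([(x, 2 * x) for x in range(-3, 4)])
squarer = train_head([(x, x * x) for x in range(-3, 4)])
```

The economic point is in the last two lines: only the small heads are trained per task, while the expensive shared base is trained once.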
Economies of Scale
From an economic perspective, foundational models offer significant economies of scale. Training a large-scale AI model is resource-intensive, requiring substantial computational power and energy. Once a foundational model is trained, however, it can be fine-tuned for various applications at a fraction of the cost of training a new model from scratch. This cost efficiency is especially beneficial for small and medium-sized businesses, which may lack the resources to develop their own AI models.
Democratizing AI
Foundational models are also playing a critical role in democratizing AI. By offering pre-trained models that can be fine-tuned for different tasks, organizations developing foundation models are making it possible for more people and businesses to leverage the power of AI. This can drive innovation and competition, leading to better products and services and fostering economic growth.
Accelerating AI Research and Development
Foundational models are not only useful in practical applications but also serve as valuable tools for AI research and development. They can be used as benchmarks to measure the progress of AI technology and to explore new methods and techniques in machine learning. Moreover, the insights gained from training and fine-tuning these models can help researchers better understand the inner workings of AI, leading to more robust and reliable AI systems in the future.
Mitigating AI Risks
Despite their potential, foundational models also pose risks, such as the generation of harmful or misleading content. By focusing on these models, researchers and developers can concentrate their efforts on mitigating these risks. For example, they can develop better methods for detecting and preventing harmful outputs, and they can work on creating more transparent and accountable AI systems. This risk mitigation is a critical aspect of ensuring the responsible and ethical use of AI.
The rise of foundational models represents a significant milestone in the evolution of AI. Their flexibility, economies of scale, and potential to democratize AI make them a powerful tool for leveraging AI. However, to fully harness their potential, it’s crucial to continue researching and addressing the risks associated with these models. As we move forward into the future of AI, foundational models will undoubtedly play a pivotal role in shaping our digital economy. Let’s embrace this future with both anticipation and a sense of responsibility, ensuring that the benefits of AI are shared widely and equitably.
tl;dr – Large Language Models (LLMs) like GPT-4 are transforming our sociocultural interactions, pushing technological boundaries in AI, creating economic shifts through automation and new job roles, raising environmental concerns due to energy-intensive training, influencing political landscapes potentially through propaganda generation, and posing new legal questions about content responsibility and copyright. As we leverage these powerful models, it’s crucial to navigate these challenges responsibly, ethically, and sustainably, ensuring a future that aligns with our shared values.
We started on a project tonight to build a computer vision model that will classify a few family members, including the dogs. We used Teachable Machine to get a model built, and will now be exporting a Keras model to run in TensorFlow. Love teaching the kids the power of AI. More to come…
Fun little project this weekend: building an object detection and classification solution for less than $100. Though this pic only shows “person” and “book” classifications, the model can classify some 90 objects! The TensorFlow Lite model is running on a 4GB Raspberry Pi 4 with a 128GB SD card. The camera is an Arducam; I need to work on its resolution, but that didn’t impact detection or classification, which ran at ~2.0 fps. Running on a Pi, there’s a give and take between model performance and accuracy, given the limited resources, but I’ll push to see how resource-hungry a model I can run on it. More to come…
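Detectors like this typically emit raw tuples of (class index, confidence score, bounding box) that you post-process yourself: drop low-confidence hits and map indices to label names. Here is a minimal sketch of that step in plain Python; the class indices and the `COCO_LABELS` excerpt are illustrative, not the exact label map any particular model file ships with.

```python
# Illustrative excerpt of a COCO-style label map (indices are assumptions).
COCO_LABELS = {0: "person", 15: "cat", 16: "dog", 73: "book"}

def filter_detections(raw, score_threshold=0.5):
    """Keep confident detections and map class indices to label names."""
    results = []
    for class_id, score, box in raw:
        if score >= score_threshold and class_id in COCO_LABELS:
            results.append((COCO_LABELS[class_id], round(score, 2), box))
    return results

# Hypothetical raw model output: (class_id, score, (x1, y1, x2, y2))
raw_output = [
    (0, 0.91, (12, 30, 200, 340)),    # person, confident
    (73, 0.77, (220, 150, 280, 210)), # book, confident
    (16, 0.32, (50, 50, 90, 120)),    # dog, below threshold: dropped
]
detections = filter_detections(raw_output)
```

Tuning `score_threshold` is part of the same performance/accuracy give-and-take mentioned above: a lower threshold catches more objects but admits more false positives.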
Wondering what’s real about artificial intelligence? Today on BrainTrust LIVE, we’re fortunate to have Cynthia Holcomb, founder/CEO of Prefeye, and Shawn Harris, Customer Partnerships & Strategy, SmartLens — two retail practitioners who are working with their clients on real A.I. solutions. They’ll give us the lowdown — more specifically, on how retailers can currently use AI for personalization, the limitations that are frustrating them at present, and what the future holds.
Topics: introduction, hype, limitations, bias, adversarial attacks, impact on developing economies and jobs, a realistic view.

Goldilocks rule for AI:
- Too optimistic: sentient AGI, killer robots.
- Too pessimistic: "AI cannot do everything, so an AI winter is coming." Unlike in the past, AI is creating real value today.
- Just right: AI can't do everything, but it will transform industries.

Limitations of AI:
- Performance limitations (limited-data issues).
- Explainability is hard.
- Biased AI through biased data.
- Adversarial attacks.

Discrimination and bias:
- Bias against women and minorities in hiring.
- Bias against dark-skinned people.
- Banks offering higher interest rates to minorities.
- Reinforcing unhealthy stereotypes.

Technical and organizational solutions:
- "Zero out" the bias in word embeddings.
- Use more inclusive data.
- More transparency and auditing processes.
- A more diverse workforce.

Adversarial attacks:
- A minor perturbation to pixels can lead an AI to produce a different output (B).
- Adversarial defenses exist, but they incur some performance cost.
- Some applications will remain in an arms race.

Adverse uses of AI:
- Deepfakes: fakes can move faster than the truth can catch up.
- Undermining of democracy and privacy; oppressive surveillance.
- Generating fake comments.
- Spam vs. anti-spam, fraud vs. anti-fraud.

AI and developing economies:
- AI will eliminate lower-rung opportunities; developing leapfrog opportunities will be required. Think of how countries jumped straight to mobile phones, mobile payments, and online education.
- The US and China are leading, but this is still a very immature space.
- Use AI to strengthen a country's vertical industries.
- More public-private partnerships; invest in education.

AI and jobs:
- AI is automation on steroids.
- Solutions: conditional basic income (a safety net that still incentivizes learning), a lifelong-learning society, political solutions.

Conclusion (course recap): What is AI? Building AI projects. Building AI in your company. AI and society.
Topics: starting an AI project, workflow of projects, selecting AI projects, organizing data and teams for projects.

Workflow of a machine learning project
How do you build, say, a speech recognition engine? Key steps:
- Collect data: people saying "Alexa," plus other words.
- Train the model: it learns an A-to-B mapping from audio clip to word. Expect many iterations.
- Deploy the model: implement it in a smart speaker. Collect new data back from the field to maintain and update the model.

How do you build, say, a self-driving car? Key steps:
- Collect data: images with the positions of other cars (draw rectangles around the cars).
- Train the model: iterate until it precisely identifies cars.
- Deploy the model: you may discover gaps (e.g., golf carts not being identified and positioned well); keep iterating.

Workflow of a data science project (output: actionable insights)
Optimizing a sales funnel. Key steps:
- Collect data: where people are coming from, time of day, machine type, etc.
- Analyze the data: iterate many times to get good insights from the data collected.
- Suggest hypotheses/actions: deploy changes, then re-analyze new data periodically.

Optimizing a manufacturing line. Key steps:
- Collect data: clay supplier, mixing time, ingredients, lead times, relative humidity, temperature, kiln duration, etc.
- Analyze the data: iterate many times to get good insights.
- Suggest hypotheses/actions: deploy changes, then re-analyze new data periodically.

Every job function needs to learn how to use data. Use data to optimize workflows through data-science-based analysis, and to take on tasks with machine learning (remember the "less than a second" rule of thumb), mapping inputs (A) to outputs (B). From sales, recruiting, and marketing to agriculture and beyond, DS and ML are having huge impacts.

How to choose an AI project
- Bring together a cross-functional team knowledgeable in AI, plus domain experts.
- Brainstorming framework: think about automating tasks rather than automating jobs; ask what the main drivers of business value are; ask what the main pain points in your business are.
- Note: you can make progress without big data. Having more data almost never hurts, and data makes some businesses (Google, Facebook, Netflix, Amazon) defensible, but with small datasets you can still make progress. The amount of data you need is problem-dependent.

Due diligence on a project (what AI can do + what is valuable for your business):
- Technical diligence: Can the AI system meet the desired performance (e.g., accuracy, speed)? How much data is needed to meet the performance goals? What is the engineering timeline?
- Business diligence: current business, lower costs; current business, increase revenue (e.g., getting more people to check out); new business, a new product or line of business.
- Ethical diligence: money vs. impact on society.

Build vs. buy
- ML projects can be in-house or outsourced; DS projects are more commonly in-house.
- Some things will become industry standard; avoid building those. Sometimes it makes sense to adopt another's platform or approach rather than build your own (resource constraints, capability constraints). "Don't sprint in front of a train."

Working with an AI team
- Specify your acceptance criteria, e.g., detect defects with 95% accuracy.
- How do you measure accuracy? With a test set (e.g., n = 1,000): a labelled dataset used to measure performance. The training set (pictures with labels) is used to learn the mapping from A to B; the test set is a separate dataset used to evaluate that mapping. Often more than one test set will be requested.
- Pitfall: expecting 100% accuracy. Discuss with the AI engineers what's reasonable. Limitations of ML include insufficient data, mislabeled data, and ambiguous labels.

Technical tools for AI teams
- CPU vs. GPU (GPUs, e.g., Nvidia's, are great for deep learning / neural networks).
- Cloud vs. on-prem vs. edge (edge: a processor where the data is collected).
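The acceptance-criteria idea above can be sketched in a few lines of plain Python: accuracy is simply the fraction of a labelled test set the model gets right, and even a reasonable model rarely hits 100%. The `detector` below is a hypothetical defect classifier invented for the example.

```python
def accuracy(model, test_set):
    """Fraction of labelled test examples the model classifies correctly."""
    correct = sum(1 for x, label in test_set if model(x) == label)
    return correct / len(test_set)

# Hypothetical defect detector: flags a part as defective above a threshold.
detector = lambda measurement: "defect" if measurement > 7.0 else "ok"

# Labelled test set: (measurement, ground-truth label) pairs.
labelled_test_set = [
    (9.1, "defect"),
    (2.3, "ok"),
    (8.4, "defect"),
    (6.9, "ok"),
    (7.5, "ok"),  # a borderline part the simple threshold gets wrong
]

acc = accuracy(detector, labelled_test_set)  # 4 of 5 correct -> 0.8
```

This is also why teams ask for more than one test set: a model can score well on one sample of data and poorly on another, so the acceptance criterion should name both the metric and the dataset it is measured on.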
In this episode of The IoClothes Podcast, we speak with Shawn Harris, Global Innovation Strategy Lead for Zebra Technologies. The reality is, innovative products don’t just sell themselves, and companies aren’t composed of just designers, developers, and engineers. Someone has to interface with the customer and keep the ship sailing along a strategic path, which includes profitability (that is, if you want to stay in business). Today, we shift gears and talk a bit about the struggles of retail, the importance of differentiating yourself in the marketplace, and how our current relationship with MS Excel may be a sign of the future!