How Generative AI is Changing the Pace of Corporate Management
In July, I introduced on this blog the "Future AI" segment from TV Tokyo's morning news program "Morning Satellite," which covered AI topics. This time, I would like to introduce the theme of generative AI and corporate management, featured in the program's "Professional Eye" corner.
The commentator this time is Ryoya Nakamura, who leads the generative AI business at LayerX (and was selected for Forbes JAPAN 30 UNDER 30 in 2023). Based on his insights, I will lay out viewpoints and implementation steps that help managers' decision-making, while also considering the realistic challenges.
In this article, I would like to briefly introduce the following themes based on the content that Mr. Nakamura discussed.
- What definitively differentiates this AI boom from past ones (and the remaining challenges)
- The true impact of generative AI = the meaning and limitations of "speed"
- The potential and risks of conquering the "upper right white space" unique to each company
- The practicalities and pitfalls of context engineering / AI onboarding
- The current state of AI agents and realistic expectations
- Core actions that management should take right now
Now that collaborating with AI has become like mass-hiring new employees who work 24/7, is your company a workplace where AI can work effectively? And are those expectations realistic?

Table of Contents
- How Is This AI Different? The Definitive Evolution and Remaining Challenges
- What Is the Biggest Impact of Generative AI? The Potential of Increased Speed and Its Constraints
- Expanding Into the "Upper Right White Space": Opportunities and Implementation Difficulties
- The Reality of Context Engineering and AI Onboarding
- AI Agents: Understanding Their Progress and Limitations
- What Management Should Do Now: A Strategic Approach and Prudent Investment
- Summary: A Balanced AI Strategy
1. How Is This AI Different?
There were two major reasons why previous AI struggled to gain traction in business settings:
- Dedicated data training was required, resulting in high startup costs.
- Data formatting was complex and vulnerable to inconsistent formats in the field.
Generative AI has significantly improved these areas:
- Pre-training equips it with "general knowledge and abilities" (e.g., it can extract and interpret key points from financial statements without any additional training).
- It can read and integrate disparate data such as text, PDFs, emails, and meeting minutes.
- With RAG (Retrieval-Augmented Generation), it can handle company-specific data without additional training (a minimal sketch of this pattern follows this list).
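For readers who want a feel for what RAG means in practice, here is a minimal sketch. The `retrieve` and `build_prompt` helpers are hypothetical placeholders, not any particular vendor's API; the idea is simply to pull the most relevant internal documents and place them in the prompt, so the model can answer from company data it was never trained on.

```python
# Minimal RAG sketch: retrieve relevant internal documents, then have the
# model answer using only that retrieved context. Helpers are hypothetical
# placeholders, not a specific product's API.

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use vector search."""
    def score(doc: str) -> int:
        return sum(1 for word in query.lower().split() if word in doc.lower())
    return sorted(documents, key=score, reverse=True)[:top_k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    context = "\n---\n".join(context_docs)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Usage: internal_docs would come from contracts, minutes, wikis, etc.
internal_docs = [
    "Expense rule: purchases over 100,000 yen require director approval.",
    "Meeting minutes 2024-06: the Osaka office moves in October.",
    "Support policy: refunds are accepted within 30 days of purchase.",
]
question = "Who must approve a 150,000 yen purchase?"
prompt = build_prompt(question, retrieve(question, internal_docs))
# print(prompt)  # pass this prompt to whichever LLM API you use
```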
However, important challenges remain:
- Data quality, governance, and system integration remain significant bottlenecks.
- According to the Stanford AI Index 2024 report, most organizations cite reliability, safety, and data challenges as major barriers.
- Productivity gains are uneven and often remain in the pilot stage.
As a result, when properly implemented, it can be expected to act like a "newly hired employee who is immediately capable" (a commonly used expression in Japanese to describe what AI can do), but the reality is that many companies are still running into implementation barriers.
What we've learned is that while AI capabilities have improved significantly over the past year, the expression Mr. Nakamura used in the program, "from the level of an undergraduate student to that of a doctoral student," is metaphorical. Improvements on benchmarks do not necessarily translate into greater reliability on complex real-world tasks. In contracts, legal affairs, and customer support, AI is effective as an assistant under appropriate supervision, but complete automation still presents challenges.
At Kafkai, we use generative AI effectively for coding, but releasing its output to production without checks is unthinkable. The AI still makes plenty of mistakes, such as incorrect assumptions in the logic, caused partly by the AI itself and partly by our own lack of clear instructions.
2. What Is the Biggest Impact of Generative AI?
While improving operational efficiency is important, the greatest value lies in increasing speed.
Examples of Speed Improvement
- Customer Support: According to NBER research, generative AI has achieved a 14% productivity improvement in customer support.
- Development Operations: Experiments with GitHub Copilot showed an average 55% acceleration in developer task completion.
- Contracts & Review: The time required for clause comparison, difference explanation, and risk identification can be shortened.
- 24/7 Operation: Tireless assistants are available around the clock, regardless of time zone.
However, there are also realistic constraints.
- According to the MIT Sloan/BCG survey, only 10-20% of companies are realizing significant financial benefits from AI.
- Scaling and process redesign remain bottlenecks.
- LLM hallucinations and instability can also delay decision-making.
To borrow Mr. Nakamura's memorable phrasing from the show, it's like being able to instantly hire dozens of new employees who work 24 hours a day. However, these new employees require appropriate guidance and supervision, and you can't entrust them with every task. Successful cases are certainly accelerating the decision-making cycle, but poor implementation can leave the desired effects unrealized.
3. Expanding Into the "Upper Right White Space"
Traditional software excelled at standardizing and productizing business processes that look similar across companies, such as sales & marketing (S&M) and human resources (HR). Conversely, business processes full of company-specific tacit knowledge and exception handling remained untouched; Mr. Nakamura calls this the "upper right white space."
Potential of Generative AI
- Ability to adapt to company jargon, internal rules, customer context, and past cases.
- Enabling explainability for exception handling and irregular responses.
- Flexibility to learn "the way we do things."
Competition won't disappear just because everyone uses AI; if anything, differentiation may become even sharper. The white space is an area of high company specificity, and AI optimized for it can become a powerful barrier to entry (a form of knowledge capital).
However, there are also implementation challenges:
- Data quality issues remain a significant challenge.
- Long-context limitations (the “Lost in the Middle” problem) can lead to information being overlooked when processing large volumes of documents.
- Fine-tuning and continuous adaptation are often necessary for high accuracy and high reliability in domain-specific applications.
Practical Approach
Early efforts are important, but first-mover advantage is context-dependent. The key is not to rush into implementation, but to focus on organizational learning and gradual expansion.
In short, it's important to look at many examples and think carefully before taking action.
4. The Reality of Context Engineering and AI Onboarding
As mentioned earlier, AI joins a company as a “new employee who knows nothing about the company.” How it is nurtured will determine the results, making the following two points decisively important.
Context Engineering
- Designing what to provide to the AI, in what order, and in what format, so as to maximize results.
- Elements: role definition, purpose, procedures, evaluation criteria, examples (few-shot), prohibited matters, and the priority of reference data (see the assembly sketch after this list).
- Establish a foundation for RAG to enable constant access to the latest documents.
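To make this less abstract, here is a minimal sketch of how those elements might be assembled into a single, ordered prompt. The field names, ordering, and example content are illustrative assumptions, not a standard and not LayerX's method.

```python
# Context-engineering sketch: assemble role, purpose, procedure, rules,
# few-shot examples, and retrieved reference data into one ordered prompt.
# Field names and ordering are illustrative assumptions, not a standard.

def build_context(role: str, purpose: str, steps: list[str],
                  rules: list[str], examples: list[tuple[str, str]],
                  reference_docs: list[str], task: str) -> str:
    parts = [
        f"# Role\n{role}",
        f"# Purpose\n{purpose}",
        "# Procedure\n" + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps)),
        "# Rules (do not violate)\n" + "\n".join(f"- {r}" for r in rules),
        "# Examples\n" + "\n".join(f"Q: {q}\nA: {a}" for q, a in examples),
        "# Reference data (highest priority first)\n" + "\n".join(reference_docs),
        f"# Task\n{task}",
    ]
    return "\n\n".join(parts)

# Hypothetical usage for a contract-review assistant.
prompt = build_context(
    role="You are a contract-review assistant for our legal team.",
    purpose="Flag clauses that deviate from our standard terms.",
    steps=["Read the clause", "Compare it with the standard term",
           "Explain the difference and its risk"],
    rules=["Never invent clause numbers", "Escalate anything ambiguous"],
    examples=[("Clause limits liability to 1 month of fees",
               "Deviation: our standard is 12 months. Risk: high.")],
    reference_docs=["Standard terms v3.2 (internal)"],
    task="Review the attached clause on payment deadlines.",
)
```

A prompt built this way reads almost like a standard operating procedure, which is exactly the point made later in this section.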
AI Onboarding
- Similar to training new human employees, gradually increase the difficulty and shorten the feedback cycle.
- Design tool integration and permission granting in stages.
- Evaluation: task success rate, validity of evidence provided, response time, and escalation rate (see the scorecard sketch after this list).
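As one way to track those evaluation criteria, here is a minimal scorecard sketch. The metric names and thresholds are assumptions for illustration and should be tuned to your own tasks.

```python
# AI-onboarding scorecard sketch: track task success rate, evidence validity,
# response time, and escalation rate per stage, and only widen the AI's
# responsibilities when the current stage clears its (illustrative) thresholds.

from dataclasses import dataclass

@dataclass
class StageResult:
    tasks_total: int
    tasks_succeeded: int
    citations_valid: int      # answers whose cited evidence checked out
    avg_response_secs: float
    escalations: int          # cases handed back to a human

    @property
    def success_rate(self) -> float:
        return self.tasks_succeeded / self.tasks_total

    @property
    def evidence_validity(self) -> float:
        return self.citations_valid / self.tasks_total

    @property
    def escalation_rate(self) -> float:
        return self.escalations / self.tasks_total

def ready_for_next_stage(r: StageResult) -> bool:
    # Thresholds are assumptions for illustration; tune them per task.
    return (r.success_rate >= 0.9
            and r.evidence_validity >= 0.95
            and r.escalation_rate <= 0.15)

week1 = StageResult(tasks_total=200, tasks_succeeded=174,
                    citations_valid=188, avg_response_secs=21.0, escalations=41)
print(ready_for_next_stage(week1))  # False: thresholds not yet met in week 1
```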
While keeping these points in mind, it is also necessary to consider the following realistic constraints:
- Even with RAG, confident errors can occur due to search failures or model non-compliance.
- Simply increasing the context does not guarantee accurate utilization due to the limitations of long context.
- Quality assurance will be improved, but it will not be perfect.
Although there are many technical terms such as RAG and few-shot, what I want to say is that prompts are not magic spells, but expressions of business design. A good prompt is synonymous with a good standard operating procedure. However, it does not guarantee perfect results.
5. AI Agents: Understanding Their Progress and Limitations
To recap: AI agents are programs that break one large task down into smaller subtasks and handle them. They are moving away from the "step-by-step" approach of completing large tasks with extensive prompts, toward "thinking for themselves, trying, struggling, and solving problems."
While the progress of AI agents is certain, a realistic evaluation is crucial.
Currently, a typical workflow for AI agents might look like this (a minimal loop sketch follows the list):
- Task decomposition → Planning → Tool execution → Self-evaluation → Loop of improvement
- Automation of entire chains: Investigation → Summarization → Contradiction detection → Drafting → Review → Submission.
- Maintaining long contexts, traversing multiple documents, and automating retries.
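The loop in the first bullet can be sketched in a few lines. The `decompose`, `plan`, `execute_tool`, and `evaluate` callables below are hypothetical placeholders that would be backed by LLM calls and real tools in practice; this is a sketch of the control flow, not a definitive implementation.

```python
# Minimal agent-loop sketch: decompose, plan, execute tools, self-evaluate,
# and retry until the result passes or the attempt budget runs out. The
# callables are hypothetical placeholders for LLM calls and real tools.

def run_agent(task: str, decompose, plan, execute_tool, evaluate,
              max_iterations: int = 5):
    results = {}
    for subtask in decompose(task):                   # task decomposition
        for attempt in range(max_iterations):         # improvement loop
            step_plan = plan(subtask, results)        # planning
            output = execute_tool(step_plan)          # tool execution
            ok, feedback = evaluate(subtask, output)  # self-evaluation
            if ok:
                results[subtask] = output
                break
            results[f"{subtask}:feedback:{attempt}"] = feedback
        else:
            # Budget exhausted: escalate to a human rather than guessing.
            results[subtask] = "ESCALATED_TO_HUMAN"
    return results
```

The important design choice is the exit path: when the budget runs out, the agent escalates to a human instead of pressing on with a guess.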
With the advancement of Large Language Models (LLMs), the quality of such tasks is expected to improve further.
However, realistic limitations remain:
- Agent reliability is still constrained by the limitations of the underlying Large Language Models.
- Brittleness, recovery from tool failures, and cost management remain unresolved issues (as is true of any software we build).
- As mentioned earlier, safety in autonomous environments and the risk of hallucinations continue to be ongoing challenges.
Cases are increasing where quality improves significantly by dividing roles among multiple agents (e.g., pairing a strict reviewer with a creative drafter). However, they are still far from fully autonomous and require appropriate supervision and constraints. Internal working documents are one thing, but mistakes are unacceptable in anything published externally.
In our service, Kafkai, we display a message saying, "Be sure to have a human review the output." It’s the same as how a human wouldn’t even consider releasing something they wrote without having it checked first.
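To make the drafter/reviewer pairing above concrete, here is a minimal sketch. `ask_llm` is a hypothetical placeholder for whichever LLM API you use, and the approval protocol is an assumption, not a prescribed pattern.

```python
# Drafter/reviewer sketch: one "creative" call drafts, a second "strict" call
# reviews, and the draft is revised until the reviewer approves or the
# attempt budget is spent. ask_llm is a hypothetical placeholder.

def ask_llm(system: str, user: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def draft_and_review(task: str, max_rounds: int = 3) -> tuple[str, bool]:
    draft = ask_llm("You are a creative drafter. Write a first version.", task)
    for _ in range(max_rounds):
        review = ask_llm(
            "You are a strict reviewer. Reply APPROVED if the draft is "
            "accurate and complete; otherwise list concrete problems.",
            f"Task: {task}\n\nDraft:\n{draft}",
        )
        if review.strip().startswith("APPROVED"):
            return draft, True
        draft = ask_llm(
            "You are a creative drafter. Revise the draft to fix the "
            "reviewer's points.",
            f"Task: {task}\n\nDraft:\n{draft}\n\nReview:\n{review}",
        )
    # Never auto-publish: anything that fails review goes to a human.
    return draft, False
```

Note that the function only returns a draft and a flag; publishing remains a human decision.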
Expectations and Reality for GPT-5
OpenAI has announced the development of a next-generation model, aiming to improve reasoning and safety. However, specific performance gains remain speculative. There are constraints on data and computation, and improvements on benchmarks do not necessarily mean improved real-world autonomy. Expectations should be kept moderate, and verification should be conducted carefully.
6. What Management Should Do Now: A Strategic Approach and Prudent Investment
Combining the “things to do now” that Mr. Nakamura emphasized with realistic success factors, here’s what it might look like:
Three Foundational Elements to Tackle Immediately
- Building an environment where all employees can safely access cutting-edge tools.
- Investing to accelerate security assurance.
- Formulating an AI strategy at the management level (business and organizational transformation).
Why Now?
- There's a learning curve for AI, both individually and organizationally, and starting early allows for compounding effects.
- Reallocating personnel freed up by departmental streamlining and designing new value creation both require management leadership.
- Organizational learning and complementary investment differentiate successful companies (though a bit dated, see this McKinsey study).
Realistic Success Factors
- Recognizing that many pilots fail to scale, invest sufficiently in governance and integration preparation.
- Consider that rapid model commercialization and open-source options can erode early competitive advantages.
- While the urgency of “doing it now” is important, rushing ahead without adequate preparation can actually increase the risk of failure.
What this means is that in fast-moving companies, employees are getting their time back for value creation. That experience builds organizational confidence and unity, and becomes the driving force for transformation. However, it only materializes with proper implementation and continuous improvement.
Summary: A Balanced AI Strategy
If you're in management, I believe you already know and understand this, but let me repeat it here:
The introduction of generative AI is not a silver bullet.
However, it is extremely important for organizations to create an "AI-friendly environment" in order to build a competitive advantage. While the technological possibilities are real, realistic challenges must also be fully considered: the time it takes for benefits to materialize after introduction, technical limitations (hallucinations, reliability, and the complexity of integration), organizational resistance to change, and learning costs.
The key to establishing a sustainable competitive advantage lies in combining strategy formulation at the management level, gradually expanding opportunities for all employees to work with AI, continuous investment in security systems (which wasn't covered in this article), and realistic expectation-setting that balances hopes against limitations.
Ultimately, while speed is important, the direction is even more so. By maintaining a balance of optimism and caution, continuously improving security systems, and providing all employees with opportunities to interact with AI while receiving appropriate guidance, we can unlock the true value of generative AI. Through a careful and strategic approach, companies will be able to maximize the potential of AI and forge a path to the future.