Carbon & Silicon

The Upside & Downside of Persistent Assistants

artificial intelligence ethics April 10, 2026

Transcript: 

The next wave of AI is not just smarter chat. It is the rise of the persistent assistant: the system that watches how you work, learns what matters to you, and quietly builds a behavioral model of your life.

At the harmless end, this looks familiar. Gmail classifies your mail. Microsoft Copilot now explicitly offers memory and personalization based on your conversations, goals, recurring tasks, and work style. Perplexity’s Computer goes further, presenting itself as an agentic assistant that can use your computer to complete tasks, and now includes reusable “skills” that remember how you want repeated work done.

That sounds convenient, and sometimes it is. The system learns your rhythms, notices patterns, anticipates needs, and promises to save you time. It is no longer just answering questions. It is studying your methods.

What begins as convenience becomes infrastructure.

The real shift comes when these assistants stop being optional helpers and become the place where work is done. In enterprise settings, that means a portal through which communication, drafting, scheduling, retrieval, and decision support all flow. Once that happens, the platform is no longer just helping the employee. It is observing, timing, comparing, and standardizing. It can measure who completes tasks faster, which sequence of actions produces the best result, and which workflows can be turned into repeatable automation. Microsoft is already moving toward a more persistent memory layer inside Copilot, and Perplexity is positioning its enterprise product as a platform that orchestrates models, files, tools, and tasks in one system.

Then the question changes.

It is no longer: “Can this help me?”

It becomes: “Who owns what this learned from me?”

If an employee invents a faster workflow inside the system, is that the employee’s craft, or the company’s intellectual property? If the platform has watched six months of behavior and turned it into suggestions, shortcuts, and automations, can the worker take that behavioral layer with them when they leave? Usually not. The memory stays with the platform, and often with the employer.

The same lock-in risk exists for solo operators. If your assistant has learned your habits, priorities, writing style, and repeated processes over six months, switching vendors may feel less like changing software and more like amnesia. Your files may export. Your habits usually do not.

There is also a darker possibility hiding beneath the productivity pitch. These systems do not just help people work. They create the conditions for replacing people with the workflows extracted from their own behavior. First the assistant observes. Then it recommends. Then it standardizes. Then it automates. Then management asks why three people are needed for work now performed by one person plus the system.

This is why “persistent assistant” is not a neutral phrase. It names a new power layer.

The platform that owns the persistent assistant does not just own a tool. It owns memory, workflow, preference, attention, and eventually the operating assumptions of the work itself.