Initially, this was intended to be a blog post on the types of decisions we have to make and what can be delegated to an AI Agent. My thoughts had gone to micro decisions, immediately verified decisions, and one-way versus two-way door decisions.

However, after some time to reflect and gain perspective on the start of this year, I’ve decided to write about two significant decisions from the first part of this year:

  1. Deciding on a Next.js experiment
  2. Trusting a human with my most important identity documents

These may seem like strange blog-fellows, but bear with me.

The first decision relates to a proof of concept for Agentia’s Chat/AI interface, while the second came at the end of a saga where my digital identity had been invalidated.

Let’s start with the drama! I’ll cover my Next.js experiment in the second part of this blog post.

A Digital Identity Crisis

Around February/March, things started to go a bit strange for me. Setting up Agentia stalled. French bureaucracy has a reputation, and Brexit is a gift that keeps on giving, so I initially thought it was related to that.

After many circular conversations, April brought the news that my visa was reported lost or stolen. With the visa in my hand and knowing I hadn’t reported anything, I was perplexed.

I was also setting up some training at this point, which got bounced back because my national digital identity had been revoked!

Now I was stressed. I had a new watch to help track my fitness program, and my HRV was lower than it had been since I started intermittently tracking it in 2017.

I spoke to La Poste, and they confirmed my visa was invalid. This had automatically revoked my digital identity.

A Leap of Faith

I headed straight to the Prefecture, which is only open to the public without reservations on Monday and Tuesday mornings. It was Wednesday, and you can’t simply make an appointment - you have to come during open hours to create one.

That morning, I had meditated for about 45 minutes and was feeling nicely detached from the panic. This allowed me to have a pleasant chat with the security guard. (Side note: the Prefecture is surrounded by prison-esque metal bars.)

After about 20 minutes of chatting, he offered to take my visa inside to see if they could do anything. This meant handing over my visa and passport.

I decided to trust him. It was a nervous moment that lasted nearly half an hour.

He returned, saying it was all fixed. I felt overwhelmed, thanked him, and went straight to La Poste.

It didn’t work. I thought the security guard was just being nice and didn’t want to disappoint me.

Three days later, at my daughter’s gym competition, I got an email from La Poste informing me that my identity was valid again!

I still don’t know exactly what the security guard did, but the three-day delay aligns with my understanding of batch processing.

The Power of Human Connection

With hindsight, trusting the security guard was the single best decision I made in the first half of this year. Over three months of issues were resolved by a pleasant conversation and a kind human.

Reflections and Lessons Learned

I’m certain there are some strong learning points for me, and perhaps for all of us, in this experience.

For me, the outcome is great (having a young family in France makes being a legal immigrant crucial!). However, there has been zero transparency about what actually happened.

I’m left wondering if a human or a system made the initial decision to invalidate my visa. This lack of clarity reinforces my commitment to working on system observability. When decisions are made that significantly impact people’s lives, we need to ensure there’s transparency and accountability in the process.

Equally important, this experience has solidified my belief that humans must remain in the loop of decision-making processes. It’s never been a doubt for me, but now I have a concrete, personal example to support this stance.

I’ll never know how many people the security guard spoke to. Likely just one, maybe two or three if he had to ask who could help. While the technologist in me sees how this entire process could potentially be automated, there’s a… “je ne sais quoi” about the importance of having humans involved.

I am certain that this resolution would never have occurred if I hadn’t stopped and chatted with the security guard after he told me the office hours. His curiosity about my situation and his openness to help… well, I’m immensely grateful!

The human element - not just the ability to empathize, but the compassion to step outside rigid protocols and to make judgment calls based on the nuances of a situation - proved invaluable in my case. As we continue to advance in automation and AI-driven systems, we must remember to preserve and value these uniquely human qualities in our processes.

This experience has given me a renewed mandate: to continue working on system observability while ensuring that our technological advancements enhance, rather than replace, human judgment and interaction.

Have you got a similar story? I’d love to hear about experiences where human intervention made all the difference, especially in situations where automated systems fell short.

(Stay tuned for Part Two, where I’ll delve into my Next.js experiment and draw some interesting parallels between technical decision-making and real-world problem-solving!)