Monday, 20 April 2026

Smart Glasses: AIsplaining, Useful, Creepy, but not Transformative


Back in December I bought a pair of Meta Oakley Vanguard wrap-round AI glasses to try out and be able to talk about with clients and at my January Tech Trends seminars. I hadn't had much opportunity to wear them before those events due to the terrible weather, but since then I've been wearing them regularly. Here are my conclusions.

Fit and Wear

First of all, these are great sunglasses - the optical quality is superb. I opted for the golden color and it gives a glorious, bright shine to any view, enhancing perceived colors. My eyes are well protected from wind, dust, and pollen. The electronics do not add significantly to the weight, and the glasses sit comfortably even on my large head.

Voices in my Head

The audio quality is surprisingly good, without compromising your ability to hear what is happening around you. I do not listen to music when out and about as I like to hear my environment. I prefer the sound coming from near my ears rather than having something in or over them. I can, however, imagine circumstances when listening like that would be pleasant, for example relaxing on a sun lounger. The only problem I have come across with streaming music is that it occasionally just starts randomly. I have yet to figure out what causes this as it does not happen that often, and inevitably only when I am busily engaged in something else.
Meta AI now offers three voices here in the UK: Judi Dench, a man, and a disdainful young English woman. Dame Judi is definitely the best option. Purely in terms of synthesis quality they are all remarkable, but they fall apart hilariously when trying to pronounce foreign words and names.
While I am not a huge fan of voice interaction, I have found that there are certain good applications when using the glasses. Classic assistants such as Alexa and Nest Home end up with the three Ts: Tunes, Trivia, and Timers. Smart glasses risk ending up with some similarly simple activities, for example asking the time, or requesting running stats without having to look at a smart watch. Much of the time it really is that trivial, but hearing new messages read out does have an appeal, as does replying verbally. It is much less silly than talking to your watch, which was one of the early applications of smart watches.

Camera

It is really sad that the camera has been demonized by a few creepy individuals as it really is useful. I take many photos with a wide variety of devices, ranging from absolute top-end equipment down to the wonderful Kodak Charmera. The camera on the Oakley glasses is in the mid-to-low quality range, similar to a low-end phone, but it is the ability to capture pleasant views while doing something else that makes it interesting. Noticing the light hitting a building while out running and being able to capture the view without breaking stride is not earth shattering, but is pleasant. On the other hand, the ability to quickly capture things as an aide memoire without pulling out a phone is useful.
I have used the camera to record some POV Parkour, but the field of view is not wide enough to make it engaging. I tried recording some hyperlapse while running, only to discover that I am apparently looking all over the place, resulting in very jerky footage, but with more careful planning it could make some quite good material. Sadly, I think the best use of video is recording conversations with people, which has some positive applications as well as the creepy ones, or capturing incidents, even though there ought to be many better use cases.
The camera really excels when combined with AI. The potential as an assistive technology for people with reduced vision is enormous. You can ask the glasses to tell you what you are looking at and they reply with remarkable accuracy. Despite one of my novels being called The Botanist, I struggle to identify flowers, plants, and trees, so I was delighted to be able to look at a flowering tree and say "Hey Meta, what kind of tree is this?" and get good answers.
Another prime use case is being able to read signs and translate other languages. I need prescription glasses to read now, but the smart glasses are not corrective, so the ability to read out the words on a plaque at which I am looking is fabulous. I have also tried translating local shop signs in Chinese and Arabic and the translation corresponded to the English names of the shops, which is encouraging. I did make sure that the glasses were not reading the English. On the other hand, the glasses dangerously misinterpreted a road sign at first but corrected themselves and apologized. They also used the American term "traffic circle" instead of the UK "roundabout" which might confuse someone who does not spend time in the USA. 

AI Capabilities

It's a good job that Bluetooth headsets normalized wandering around apparently talking to yourself, as it is remarkably easy to get into discussions with the Meta AI. Unfortunately quite a few of these degenerate into fights about how and what to do. Meta AI seems very eager to tell me how to do things on my phone. I am going to call this "AIsplaining." While I had the dismissive young woman's voice enabled she told me that I'd received a long WhatsApp message from my wife, read out part of it, and then bluntly told me to read the rest on the phone. Rather than telling me that I could get stats from my Garmin watch by asking for them individually - "Hey Meta, what is my pace?" - the AI strenuously denied for many minutes that there was any integration with Garmin, which was very odd. There is, apparently, some way to take a photograph with the stats nicely inserted into it, but I have yet to figure it out.
Another experience was trying to get the glasses to read out a message a second time. The AI adopted a positively hostile tone and kept telling me to read it on my phone. Very odd, especially since it had read the message out a couple of minutes earlier. Also, I could not send a message in French; it was as if I were saying nothing. I have had incidents of AIsplaining when replying, but when it works, replying to a message in English works well and can be done while your hands are occupied, wet, or dirty.
The most impressive AI capabilities are definitely the ones associated with the camera. I even tried asking the glasses where I was, knowing that there is no GPS embedded in them, and they were able to use the distinctive view of the crags to tell me that I was in Holyrood Park in Edinburgh.

Conclusion

First of all, I don't think anybody will wear smart glasses unless they either need prescription glasses or there is bright sunshine. It is just too much of an imposition. However, if you do wear glasses, the added capabilities are useful "nice to have" features for most people, not more. They are not transformative in the way that smart phones are, or even as useful as the more discreet smart watch. They could, however, be completely life changing for those with impaired vision, especially using the "Be My Eyes" service.

Tuesday, 15 August 2023

A Prescriptive Path for Digital Resilience

The four-step process shown in this picture was used throughout Splunk’s recent .conf23 user conference. It was used with different titles, different messages before the arrow swooshing upwards, and with different elements within the product suite linked to the segments. The objective is to show a journey of continual improvement by enhancing organisational ability to respond to problems.

Capability Maturity Models

A prescriptive path is essentially a Maturity Model, a mechanism defined in 1986 to measure organisations' ability to develop software reliably. The original model was developed at Carnegie Mellon for the US military as a means of assessing software suppliers. The classic model has five steps:

  1. Initial (sometimes called disorganised or chaos)
  2. Managed
  3. Defined
  4. Quantitatively Managed
  5. Optimizing

Most organisations were at level 1 — and most of them still are. The objective was to ensure timely delivery of large-scale software projects that met their requirements. Despite many people building complex processes, the sad reality is that building large software projects is genuinely difficult. Even when processes are applied rigorously, software grows organically and, despite everyone's best efforts, results in a Big Ball of Mud. Generally these failures are project management antipatterns; however, there is also a parody "immaturity" model which rings all too true.

A Maturity Model for Digital Resilience

The original maturity models were driven by the desire to deliver hugely complex software systems from sprawling specifications; the classic waterfall model. As such it is largely based on managing projects, things that have beginnings, middles and ends. A maturity model for digital resilience, however, has to be designed for continuous operation and to have objectives that are as clear as delivering functionality on time.

While the proposed journey reads well and ties into the company’s range of products, underneath it lacks the logical progression of the typical maturity model. Admittedly the term maturity model isn’t used, but the intent is clear. The four steps in the prescriptive path are:

  1. Foundational Visibility
  2. Prioritized Actions
  3. Proactive Response
  4. Optimized Experiences

The first three make sense, but the leap to optimization is premature, and different versions of the chart offer different forms of optimization. For the model to work across the whole product range, it needs to encompass the goals of all of them. It also needs to bridge the gap to optimization; while active response is important, in itself it doesn't cover the gamut of resilience.

Building a true Digital Resilience Maturity Model

As I have already mentioned, all organisations need to consider their digital resilience. A maturity model is a great way of providing a structure to assess and achieve that. It could even be used as a form of certification. As I've been considering Splunk's prescriptive path over the last few weeks I've been thinking about what ultimate form this model would take and to whom it would apply. Ideas are forming, but they are not yet ready for publication.

Of Course There Was AI at Splunk .conf23


Every product announcement in 2023 is required to include AI. It's the law. However, most of them are vague and fluffy, existing just to keep the investors happy. Only a few offer meaningful benefits. Splunk's new offerings are in this latter category, eschewing wild claims and big noise while respecting the importance of humans and the real issues that SREs face in their daily working lives. And refreshingly, they are not all generative AI.

Not AI Newbies

The new AI capabilities were presented by Min Wang, who became Splunk CTO in April 2023 after five years as an ED at Google, where she worked on the Google Assistant. While that consumer-focused product may seem adjacent to SIEM, that's not the point. It's the experience of rapidly and easily assisting people that counts, augmenting their productivity.

This is also not Splunk's first sortie into machine learning and AI, having launched Splunk ML in 2015. The company not only has deep AI-related experience, it already has a mass of data on which to draw, whether for threat and attack detection, automation or analysis. As a result, Splunk has domain-specific models built from a solid base, unlike a generic tool such as ChatGPT, which lacks any contextual knowledge.

Responsible AI, Really

The current boom in AI has led to a crazy amount of AI washing, irresponsible releases and endless foolishly exaggerated claims. There are a number of basic rules that should be applied to launching any AI product, and Splunk is one of the few organisations actually following them.

The first rule of responsible AI is to always maintain a human in the loop. All too often cost-fixated management try to use AI as a means of eliminating staff, often with catastrophic results. Since security and incident management is a cost that doesn’t directly drive revenue, all too many executives will consider it a grudge purchase.

This means that the Splunk AI tools do not take action directly, always confirming via a human what action, if any, should be taken. This approach maximizes the likelihood of the correct action being taken by combining human insight with the scale of machine detection, while minimizing the risk of humans missing something and machines jumping to the wrong conclusion.

Explain Yourself, Machine

Another rule that should be respected by AI systems, but all too often is not even considered, is auditability. This means the ability for the machine to explain why it did something or recommended a particular course of action. Just as with a human, you can ask the reasons behind choices. This is particularly important when there is a chance of legal exposure or an insurance claim, where every little detail of an incident will be dissected and examined.

The Splunk AI Assistant is exemplary in this respect. Its main purpose seems to be helping SREs write custom filtering code in SPL2. Most AI-enabled tools simply belch out a bunch of stuff and expect the human to check whether it is correct.

The Splunk AI Assistant uses a conversational approach to go way beyond this. The user can ask it to create a query in plain language (presumably only English at present), and the tool returns the code with a line-by-line explanation of what it does and why. This serves three vital functions:

  • Providing an audit history that explains why the actions were taken.
  • Allowing experienced engineers to spot errors.
  • Training junior engineers in developing SPL2 scripts.

Most of us would say that handling an outage of any kind is a learning experience; with the AI Assistant, Splunk can turn any operation, routine or emergency, into an educational opportunity. This helps those who may be experienced operators but lack programming skills to develop an understanding of scripting. And given the shortage of staff in the area, this is a useful tool for skills transfer and onboarding new SREs.

Revenge of the Command Line

I found it amusing that amongst all the gorgeously designed, highly-visual user experience tools offered by Splunk, the latest and greatest tool was essentially a return to a command line. Chat interfaces, however, turn the traditional command line interface on its head as the AI Assistant learns what the user wants and needs, not the reverse. No more figuring out the right command and its options, just use plain language.

AI and ML Transfusion

ML and AI capabilities are being rolled out across Splunk's security, observability and platform products. Given the existing ML heritage this is no sudden move to keep the vultures on Wall Street happy, but part of a longer roadmap to AI-enable a broad range of capabilities. This careful approach enhances the product line where AI and ML will add the most benefit to security and reliability professionals.

Big Picture Thinking at Splunk .conf23


What technologies and capabilities, especially new ones, are behind Splunk’s anchor message Building Digital Resilience at their recent user conference, .conf23? The good news for Splunk and its customers is that there was plenty to back up the message. Even better, the new capabilities were wrapped in a message on tool consolidation and collaboration within teams. I’ll call that big picture thinking because it embraces a whole raft of different roles and circumstances where the product capabilities can be deployed.


Bringing IT Together

There were numerous small feature announcements that drew whoops of delight from the Site Reliability Engineers (SREs) in the audience. One feature greeted with applause and shouts was unified identity for accessing both Splunk Cloud and Splunk Observability Cloud data. While you would probably have expected it to exist already, this demonstrates that the products are coming together to form a single suite.

Unifying SecOps on a single work surface is the objective of the latest iteration of Splunk Mission Control, and we saw some very nice demos. The user experience is delightfully consumer grade, which is important for many reasons, not just aesthetics:

  • It helps bring new people into the world of SRE by reducing the learning curve, a theme that applies to several of the new features.
  • It reduces error rates by making processes and information easy to follow and key data immediately visible.
  • It eliminates switching between multiple different experiences, reducing the time taken to find the trouble.

There really is no need for enterprise software to be dull and grey any more. Progress. Mission Control brings together Splunk Enterprise Security, Splunk Attack Analyzer, and Splunk SOAR. Combining these critical services reduces stress during incidents, and allows teams to work together more easily in normal operation.

Reflecting Hybrid Reality

The reality of digital infrastructure is that it is fragmented, poorly understood, and barely documented. Most organisations rely on a mix of on-prem systems, potentially in multiple locations, multiple cloud providers and a mix of SaaS products. This fragmentation clearly adds to the complexity of running the estate, as well as creating many more edges than were previously present, potentially increasing the attack surface.

Two quotes from Splunk CEO Gary Steele’s keynote stick in my mind: “you can’t secure what you can’t see” and “you can’t operate what you don’t know exists.” Documentation, where it exists, is usually wrong, sometimes dangerously so. The extension of technology into every branch of business means that things are constantly being changed, with the decentralization of IT budgets resulting in a proliferation of applications, devices and vendors.

This organic growth is why observability is so important, however the long and passive-sounding word observability itself doesn’t help. All too many IT people are reluctant to admit to not knowing what is on their network and most management are clueless about the whole thing. Observability deserves a dynamic upgrade with a more active, even aggressive, title.

Owning the Edge

Edges, furthest from central control, are all too often where things go wrong. As mentioned above, current fragmented, federated IT estates introduce many more edges over which data has to flow. Responding to this need, Splunk has launched two products.

The first is Edge Processor, a software appliance that implements data transformation pipelines for ingesting information. It uses the second generation of Search Processing Language (SPL2) which allows for continuity across the platform and reusing the

The second is Edge Hub, a hardware appliance. Yes, hardware. Splunk is working with partners to deploy this device. It's small but heavy, with a chunky heatsink on the back and a surprisingly bright touch-sensitive display panel on the front. This is not your classic hardware interface, but very much a consumer-grade experience. I am curious to know how this will go down with gnarly industrial engineers.

The device is a veritable Rosetta Stone of industrial protocols, as well as having built-in detectors for temperature, vibration, sound and video. These are designed to support a wide range of operational applications, from monitoring cabinets in a data center to the full industrial spectrum of conveyors, pumps and the like.

The objective is to connect currently unconnected devices, essentially bridging the OT and IT worlds and allowing OT data to be added to the overall pool for analysis. This has the potential to detect all sorts of new trends and opportunities for optimization. It will be very interesting to see what comes from this; I can see immense possibilities for sustainability initiatives.

On the other hand, I can see the old antipathy between OT and IT resulting in disagreements and less than optimal implementations. This is where the partners come in. Working through partners is a wise move by Splunk. The selected partners already have the trust of the OT world, along with the serious operational credentials that Splunk lacks.

Monday, 7 August 2023

Building Digital Resilience

Digital resilience is a critical topic about which I've already written and spoken. Digital resilience should be a concern for every business as everything in every business now depends on digital technology. Unfortunately most business management are ignorant of their total dependency on something they fundamentally don't understand. It's just something that IT does, right? No, wrong. Completely and dangerously wrong.

I was recently a guest at Splunk's .conf23 user conference, a fabulous confection of announcements, technical sessions, customer stories, fez-wearing MVPs, and a giant inflatable pony called Buttercup. Splunk has grown from log analysis into what is now called SIEM, allowing technical teams to find what has gone, or is going, wrong in their networks, especially during security incidents.

While Splunk is a leader in SIEM tools, a growing part of Splunk's toolset is observability tools, where its leadership has also been recognised. These help businesses know what devices and applications are on their network, what data is flowing where, potential vulnerabilities, and other information that a naive business person might think was well known. The sad reality is that most businesses of any size have only a rough idea of what systems they have, how they are connected, and which parts are most important. Of course there's documentation, but it's out of date and mostly wrong. Some key individuals know in what way the documentation is wrong, but don't have the time to correct it; after all, they are key individuals. Sometimes they don't have the inclination either, and prefer keeping that knowledge to themselves.

Clearly SIEM and Observability belong together, with observability providing essential context for security and other incidents. I was, therefore, delighted to find that the big theme of Splunk .conf23 was how the combination was helping Build Digital Resilience. I will be writing about some of the details of the new announcements in subsequent posts, but I was particularly pleased to find the message of Digital Resilience at the core of the vision presented by President & CEO Gary Steele. This was reinforced by Gretchen O'Hara, VP of Partnerships and Alliances, who called digital resilience a board-level topic.

Unfortunately, all too few companies share her enlightenment. While legal and financial non-executive directors are the norm for many boards, there are precious few that see the importance of having independent technical expertise available at this level. The rash of ransomware and other cybersecurity incidents demonstrates the critical nature of our digital infrastructure. It's time for the importance of digital resilience to be fully recognised with a seat on the board.

Thursday, 9 February 2023

Would we even notice a digital pound?


With the announcement that the Bank of England is consulting on the creation of a Central Bank Digital Currency (CBDC) I thought I would write my thoughts about it.

Read the post here.

Tuesday, 1 November 2022

Building Digital Resilience

Last month I presented at the 7th CENSIS Tech Summit in Glasgow. Here's the video of me speaking: