A speech Bill Buxton gave in Dublin in 2012. It is still worth reading today: most good ideas about design are rather timeless, and so are some of the books he mentions. Enjoy!
It's been a while since I last visited Dublin, or Ireland at all; my last trip was back in 1982, when I remember climbing Carrauntoohil. Now, let's dive into the topic at hand.
Let me start with a quick show of hands: how many of you are designers, and how many are developers? But what about the rest of you? Are there any architects, students, CEOs, or management executives among you? How many of you are involved in the business or finance side of things? Ideally, we should have equal representation from design, technology, and business, because those are the three pillars any business enterprise needs in today's world.
Now, preparing for this talk was a bit challenging, not because I had nothing to say, but because I was trying to figure out what would be worth saying. I first used a computer in 1971 to compose music for a film soundtrack. That computer had a mouse, real-time sound synthesis, computer graphics, two-handed input, music notation, and a piano keyboard. It was easy to use, and I've spent the rest of my life trying to recreate that simplicity.
My first published work on pen-based input was in 1978, and we were working on multi-touch technology by 1984. This long history in the field has made me realize the importance of getting things right. We are on the cusp of a major change, with a new generation of technologies about to launch. The game is changing from simply making things work to creating smooth, elegant experiences. The challenge now is not just about making things, but deciding what to make and how to make it.
People often say that being at the beginning of a new technology or trend is advantageous because there's a lot of low-hanging fruit. But being first isn't what matters. What matters is having the commitment to see your ideas through, even if it takes 20 years.
Now, let's talk about the concept of beautiful apps. What does it mean for an app to be beautiful? We often say that beauty is in the eye of the beholder, but this phrase is biased towards visual aesthetics. As a musician, I find this offensive because beauty isn't just about what we see. The language we use shapes how we think and approach things, including making apps and starting businesses. So, let's broaden our understanding of beauty and apply it to our work.
In the realm of interaction design, the best work often goes unnoticed. This is because when a design is executed brilliantly, it becomes so transparent that it's almost invisible. I like to refer to this as "out-of-your-face" design, as opposed to interface design. It's elegant, it's seamless, and it's just there. People are delighted by what they're doing and they might even say, "Wow, that felt great." However, your work may not get featured in a design magazine because it doesn't photograph well. Design, after all, is often about looking cool in stylized photographs.
In the traditional design world, the focus is often on the visual aspect. But the true beauty of a design lies in the experience it provides. If you want to create beautiful apps, you need to consider not just the visual aspect, but also the feel, the smell, the hearing, the kinetics, the emotions, the culture, the mind, and all other factors that contribute to the user experience. Before we even begin to discuss beauty, we need to understand its dimensions. What are the aspects that everyone else is forgetting while focusing on creating something that looks gorgeous?
The essence of a design lies in its experience. For instance, if you give a flutist two flutes that look identical, one being a student flute and the other a beautiful instrument, you could tell which one is the better flute just by watching the person hold it. The same goes for a violin. You don't have to play a note. Just pick it up, feel the weight, feel the action, and you'll see immediately whether the person caresses it or merely tolerates it in their hands. Does the instrument show contempt for the hours spent learning to master it, or does it show respect and attempt to make every hour pay back with a factor of two instead of one?
In essence, beauty lies in the experience of the beholder. The experience is the composite takeaway from all the factors mentioned earlier. The challenge then is how to incorporate this understanding into our design process.
This brings us to the topic of design versus design language. There's often confusion between these two terms, not just in the Windows environment but in other environments as well. Words are important. If we want to change the culture of thought about experience design within our organizations, we need to start by developing a common vocabulary. Cultivating a particular kind of culture within an organization can be achieved through vocabulary and language.
Let's consider English. English is a language, but was it designed? No, it evolved. Now, let's consider C-sharp, a programming language. Was it designed? Yes, it was. It evolved from C, but it was an explicit reformulation. The lesson here is that languages can either evolve or be designed. Both are acceptable. The notion of a language being designed is something we understand. It doesn't have to be that way, but it can be.
In the realm of user experience, we often encounter various interaction languages. Some of these languages are meticulously designed from scratch, while others are assembled from existing elements, evolving over time. They are codified, or standardized, in a way that may seem less designed, but they are defined by this codification. This is the language we use, and this is how we use it.
Consider this: just because a statement is grammatically correct doesn't necessarily mean it's true. A statement can be well-formed, yet still be false, culturally inappropriate, or even a lie. The untruth of a statement can have consequences that extend beyond its immediate semantic context. This principle applies not only to everyday language, but also to design languages.
Take, for example, the sentence, "Yeats was an Irish poet who won the Nobel Prize in 1923." This is a true statement, expressed in proper English. However, despite its truth and grammatical correctness, it is not poetry. It is merely a utilitarian statement of fact. This is similar to a lot of interaction design, where designs are often created within the vocabulary of a system.
Now, consider this excerpt from Yeats' "The Magi": "Now as at all times I can see in the mind's eye, in their stiff, painted clothes, the pale unsatisfied ones appear and disappear in the blue depth of the sky..." This is also English, and it is well-formed. But it goes beyond a mere factual sentence. It is poetry, filled with implicit truths that require deep analysis to fully understand.
Understanding the difference between beautifully articulated, factually useful information and poetry is crucial. If we don't understand this difference, how can we aspire to elevate our language beyond its utilitarian function? You can't create a beautiful app unless you understand these differences and know how to apply them.
The lesson here is that while one can write poetry in English, not all English is poetry. Furthermore, not all English is true. This is a key point to remember.
Consider another example from Yeats, this time one laced with Irish. Even though it's not strictly English, it is still great English-language poetry. This shows that you don't have to be confined by the strict rules of a language. You can augment the language, inventing new words that fit the context and sharpen the point you're trying to make. For instance, I often use the word "skeptimist", a term I invented to describe someone who is half optimist, half skeptic. Using such a word, even though it's not officially recognized, can make your point more memorable precisely because it deviates from the usual language.
In any language, including interaction language, it's important not to take the guidelines too seriously. There's always room for flexibility and innovation. For instance, consider the concept of "trustification," a term I coined while working in telepresence and video conferencing.
Despite the initial skepticism towards video conferencing, with many claiming it didn't improve productivity, I argued that its value lay not in the bandwidth of communication or productivity, but in the ability to read body language and determine trust. It's about understanding the emotional state of the person on the other end of the line. If you fail to recognize this, you'll end up measuring the wrong things.
This brings us to the evolution of language. Just as people working in C started to build functions and realized they should be fundamental primitives in the language, we can also introduce new vocabulary and evolve the language. This is how progress happens. The words you invent can become part of the language almost instantly.
However, there's a caveat. When you introduce new words, initially people might not understand what you're talking about, which can be disruptive. But with the right context and body language, you can assert the meaning. The key is to ensure the meaning is consistent with the conversation at hand.
This brings me to an article I read recently, which criticized the use of skeuomorphism in design - the practice of making e-books look like paper, for example. The article argued that this was a misuse of new technologies. However, I believe this argument is flawed.
Yes, there's a risk of bad design if you take things too literally. You could end up designing an automobile that drives like a carriage. But it's important to differentiate between skill and what that skill is applied to. If you simply emulate what you were doing before with new technology, you risk falling into the trap that the article warns against.
However, the article fails to recognize that acquiring a new skill is extremely expensive. There's something called the power law of practice, which states that skill acquisition is a power function, not a linear one. To become highly skilled at something is really, really expensive. Therefore, it's not always wrong to use new technologies to utilize existing skills.
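For readers who want the actual relationship (this is the standard textbook formulation, not something spelled out in the talk): the power law of practice says the time to perform a task on the n-th trial is roughly T(n) = T(1) · n^(−a), where T(1) is the time on the first attempt and a is a task-dependent learning rate. Because the curve flattens as n grows, later hours of practice buy far less improvement than the early ones, which is exactly what makes deep skill so expensive to acquire.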
In our profession, we must be cautious not to design interfaces that require fundamentally new skills when existing skills could suffice. This is because we're imposing a cost of use on the system, which takes time and effort away from the primary purpose of using the system.
The design we create should aim to accelerate the process by which novices can perform like experts. This doesn't mean making you a better architect, for example; it means making the operation of the tool itself something you barely have to learn.
The difference between emulation and innovation in this context is how literal the transfer of skills is. Skilled transfer, or taking an existing skill and applying it in a new context, is much cheaper because the only thing you have to learn is that the skill applies in this new context. This reduces the cost and accelerates the acquisition of expertise on the operational aspects of using your system.
If there's a one-to-one correspondence, the same skill for the same task, then we're in the realm of skeuomorphism. But if there is a gap, the wider the gap between the skill and where it's being applied, the more innovative it is.
Consider this example: everyone has a strong response compatibility between pressure and speed. If you've driven a car, you know that the harder you push, the faster you go. We've been able to put force-sensitive resistors under the left mouse button for pennies since 1982. Yet, we still can't point at the scroll arrow and push, and the harder we push, the faster it scrolls. This is absurd.
The reason this hasn't happened isn't because it's hard, but because we haven't realized that this was useful or that it was possible. This isn't skeuomorphism because we've never done that before. However, it builds upon a well-established existing skill and applies it in a new context, which could be considered an example of innovation.
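To make the idea concrete, here is a minimal sketch of pressure-controlled scrolling, assuming a browser-style pointer API that reports pressure. The element names and the speed constant are invented for illustration; pens and some touch hardware report real pressure, while a plain mouse does not.

```typescript
// Pressure-controlled scrolling: press on the scroll arrow and the harder
// you press, the faster the list scrolls. Assumes PointerEvent sources
// that report `pressure` in the range 0..1.
const scrollArrow = document.getElementById("scroll-down-arrow")!; // hypothetical element
const list = document.getElementById("list")!;                     // hypothetical element

const MAX_SPEED = 1200; // pixels per second at full pressure (arbitrary)
let pressure = 0;
let rafId: number | null = null;
let lastTime = 0;

function step(now: number) {
  const dt = (now - lastTime) / 1000;
  lastTime = now;
  list.scrollTop += MAX_SPEED * pressure * dt; // scroll speed tracks pressure
  rafId = requestAnimationFrame(step);
}

scrollArrow.addEventListener("pointerdown", (e: PointerEvent) => {
  pressure = e.pressure || 0.5; // mice report a fixed 0.5 while a button is down
  lastTime = performance.now();
  rafId = requestAnimationFrame(step);
});

scrollArrow.addEventListener("pointermove", (e: PointerEvent) => {
  if (rafId !== null) pressure = e.pressure || pressure;
});

scrollArrow.addEventListener("pointerup", () => {
  if (rafId !== null) cancelAnimationFrame(rafId);
  rafId = null;
  pressure = 0;
});
```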
The key is to always look for ways to save users from having to learn something new. There are times when it's worthwhile developing a totally new skill, but it's also important to leverage existing skills where possible.
The critical thing is to avoid getting sucked into debates and instead focus on understanding what's actually going on. It's a design problem that requires understanding the merits of both sides and figuring out on what it depends.
When we start looking at the interaction language of a system, we need to understand what makes it different. In music, there's a term for this: idiom. If you play an instrument, you know what it means for a piece to be written idiomatically for it, to exploit what that instrument does naturally. That's the essence of the language.
By the way, I'm a Doyle; my grandmother was a Doyle, so I have some legitimacy here. It's practically a passport.
When we examine the design of live tiles, we can see that they are more than just graphically altered icons. They may appear as simple squares on a grid, but there's more to them than meets the eye. The concept of live tiles introduces a new context to the use of icons, a concept we've been familiar with since the days of Dreyfuss and his book on symbols.
However, the use of graphics in live tiles can sometimes be overemphasized. While graphics are important, they can also lead to confusion. For instance, when live tiles are filled with photographs, it can be difficult to quickly understand what each tile represents. The shape of the tile doesn't provide much information, and the graphic design of the tile needs to be distinct enough to stand out from the others.
Consider two tiles filled with photographs. One represents stocks, the other a newsfeed. Without any context, it's hard to tell them apart. The stock tile uses news images to reflect the state of the stock market, but without this knowledge, the images could be confusing.
There are tiles that use traditional iconography, reminiscent of Windows 3.0 or the early Macintosh. Some tiles are sparse, not fully filled, creating a pattern that catches the eye. These tiles are interesting and iconic, demonstrating the skill of the graphic artist while still maintaining the live tile concept.
The design constraints we work with are not dictated by the design language of Microsoft or Apple, but by the limitations and potential of the users' motor, sensory, and cognitive skills. The design process starts with the design language, but it doesn't end there. The final product needs to make sense to the user.
It's important to differentiate between a design and a design language. Having a consistent design language across products doesn't necessarily mean the design is good. Just as using correct English doesn't guarantee the truth or quality of a statement, conforming to a design language doesn't guarantee a well-designed product.
My composition teacher in music school taught me that you need to learn the rules before you can break them. This applies to design as well. Most rule-breaking is done unintentionally, leading to unpredictable and uncontrollable outcomes. Understanding the rules gives you options and allows you to make informed design choices.
So, how do we know if a design language is well-defined enough to allow for good design? I would argue that a design language should never be fully defined, as that would limit creativity and make the design process overly complicated. When the Macintosh first came out, its interaction design language was largely informed by what was going on at Xerox PARC and with the Xerox Star. This allowed for a degree of freedom and creativity in the design process.
The original Macintosh team didn't copy verbatim or use universal operators, but there were many similarities with previous systems. However, the user interface guidelines book wasn't published until after the Mac was released. The team did two significant things.
Firstly, they built a toolkit, known as MacApp, along with other development tools, to exploit the path of least resistance: the tools, libraries, and documentation were designed so that doing things the preferred way was easier than doing them any other way, which quietly guided developers down that path.
Secondly, they developed applications like MacPaint and MacWrite. These applications were somewhat useless at the beginning, but they served as templates for developers.
I applied this approach when I developed a product called the PortfolioWall. This product was designed for automotive studios and had to be a walk-up-and-use interface. We built a behavioural prototype that had the flow and action we wanted, even though the code behind it was throwaway.
We also gave a specification in human terms. The specification was that any employee of General Motors, from the president to the janitor, had to be able to learn 80% of the functionality in 3-5 minutes with a 90% retention a week later. If this wasn't achieved, it was considered a stop-ship bug, even if every line of code was correct and every function was implemented.
This approach changed behaviors. It wasn't about ticking off a list of features. Developers are ambitious and creative, and if you reward them for adding features, they will add more features that you never asked for. They will design these features themselves and they will appear in the code because they have the power to do it.
Developers are very good at developing features, but providing access to those features through the interface is the challenge. If the person developing the access doesn't fully understand the features, they can't develop the access to the features.
In my experience, if you study computer science, you want to assert your expertise and creativity by how elegantly you can program. The designer should work closely with the developer, like a structural engineer and an architect, to create the best product.
When you set the specification as a challenge, engineers want to beat the spec. In the case of the PortfolioWall, the engineers did everything I asked for and more. They proved their skills by recognizing relationships, eliminating complexity, and beating the spec.
The language you use, the examples you provide, and what you consider a spec are all crucial. This approach may not work for something on the scale of Windows, but it worked for SketchBook Pro, a product developed by my team.
On July 1st, we embarked on a mission to create a product for the launch of the tablet PC. We had a tight deadline, aiming to ship by November 7th of the same year. This meant we had to go from having no team to shipping a product in just over four months. By October 1st, we had the product ready, thanks to a focused approach and the use of prototypes. This process was unlike the one used for products like Office or Windows, but it was similar to the process used for apps on systems like the tablet PC.
One of the key strategies we used to speed up the process was to draw from our past experiences. We didn't start from scratch. We realized that we could learn a lot from film, where the idea behind live tiles has a long history. The first use of what are effectively live tiles was in the 1966 film Grand Prix, where the screen was split into multiple simultaneous panels. The technique was pushed further at Expo 67, the Montreal World's Fair, where multi-screen presentations were used to tell stories and convey messages effectively.
In our profession, stepping out of our comfort zone and exploring other areas can be both entertaining and enlightening. For instance, watching films that use live tiles can be a great way to learn and get up to speed. If I were developing apps, I would probably organize film screenings on Friday afternoons or encourage everyone to watch as many films as possible.
One of the films I would recommend is the first Thomas Crown Affair, directed by Norman Jewison. Jewison was inspired by the films he saw at Expo 67 and used the same technique of splitting the screen into different tiles. This was done with optical printing, long before digital technology was available.
Looking at these live tiles in the context of visual language, I realize that there's nothing new under the sun. Everything is just a variation of something older. By going back to these older techniques and learning from them, we can broaden our experience and bring more to the table.
This is just one example. There are many other aspects, like touch and gesture, that we can learn from. The internet is full of examples if we know where to look. For me, eBay is a valuable development tool. I have a large collection of gadgets that I bought for a small price. Each one is a working prototype that I can pass around for people to experience and learn from.
In conclusion, there's a lot we can do if we're willing to learn from the past and adapt to the present. Just like in music, where you have to change the arrangement of a piece to suit different instruments, we have to adapt our techniques to suit different platforms. The key is to understand the nuances of each platform and to pay attention to the details.
The stylus presents a significant opportunity in the realm of technology. While I am a proponent of touch and multi-touch technology, having published the first paper on multi-touch in 1985, I am aware that it is not the solution for every situation. Each technology has its strengths and weaknesses, and the idea that one interaction technique can cater to all needs is simply not true.
Consider the Windows 8 style, formerly known as Metro, and its application on mobile phones. If you were to use the touch and tiles interface while driving, it would be a dangerous distraction. That's why we developed Sync, a system that changes the interaction language to speech activation when you're driving.
Imagine being on a call, receiving a text that is read out to you by your car, and then responding to it, all without touching your phone. When you park and leave your car, the call is seamlessly transferred to your handset, and the interaction changes back to touch and tiles. This seamless transition, despite a 90% change in the hardware supporting the call, allows for a consistent conversation.
The key is not consistency in user interface, but rather in the conversation. The interaction style may need to change completely and seamlessly depending on the situation, especially when using a slate that isn't anchored by a large processor and plugged into a wall.
Now, let's talk about the stylus. I want to show you an example of a product that was released around the same time as Windows 3.0. This product, Wang Freestyle, allowed you to annotate any document with your pen, record your voice, and synchronize your voice with what you're pointing at and marking. You could then send this as an email.
Imagine being able to review a business plan, expense report, novel, or artwork, and send your comments as if you were looking over the recipient's shoulder, even if they're across the continent. This capability is still not available today.
For business purposes, the ability to annotate existing documents, images, videos, spreadsheets, novels, artwork, and more, and then send it, could be incredibly valuable. This would require minimal bandwidth, as you're only sending a still image, mouse data over time, and speech.
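As a rough sketch of why the bandwidth is so small, the payload of such an annotation need be little more than the following. The type and field names are hypothetical, invented for illustration, and have nothing to do with Freestyle's actual format.

```typescript
// A voice-and-ink annotation over a still document page:
// one image, a time-stamped stream of pointer samples, and compressed speech.
interface InkSample {
  t: number;        // milliseconds since the recording started
  x: number;        // position on the page, in page coordinates
  y: number;
  penDown: boolean; // whether the pen is marking or just pointing
}

interface AnnotationMessage {
  pageImage: Uint8Array;  // a single still image of the document (e.g. PNG/JPEG)
  inkTrack: InkSample[];  // pen positions over time, a few bytes per sample
  audioTrack: Uint8Array; // compressed speech, synchronised to the same clock
}
```

Played back together, the three streams recreate the over-the-shoulder review, and none of them is large.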
Imagine being able to review and mark up reports from the comfort of your couch, just like you would with paper, instead of having to print everything out. The tools for this have been available since 1990, yet we still can't do it, despite having about 100,000 times the computing power today.
This is an example of how we can learn from the past. Many technologies failed not because they were bad ideas, but because the timing wasn't right. I believe that a product like Wang Freestyle could be successful today.
Imagine this: you're on a ski trip and you've just come down a thrilling slope. You take out your Windows slate and snap a picture of the hill. You mark up the photo, indicating where you started, the path you took, where you tumbled, and where your friend Buxton nearly went over a cliff. You then narrate your adventure, pointing out the marked spots on the photo, and send it off as a rich image via email.
Now, developers, wouldn't it be great to have a tool that could do this? And wouldn't you use such a tool if it were available? This is just one of the many ideas I've come up with.
But let's talk about patents. They last for 20 years. So, if a patent was filed in 1990, it would have expired in 2010. Yet, no one has capitalized on this expired patent. I don't mean to trivialize the issue of patents. I've worked as an expert witness on a case and have seen how complex patent law can be. But as a developer, you shouldn't be worrying about that. Your focus should be on getting your ideas out there.
It's all about cost-benefit analysis. If your idea is valuable enough, it's worth the legal costs. If it's marginal, then it's not worth the effort. But you need to delve deep enough to determine the value of your idea. That's where sketching techniques come in handy. They allow you to explore different possibilities without spending too much.
Remember, it takes 20 years for an idea to go from inception to a billion-dollar industry. Most of that time, the idea is under the radar. Any idea that will become a billion-dollar industry in the next 10 years is already 10 years old.
Now, let's talk about a concept I've worked on extensively: the radial menu, also known as the marking menu. It's a pop-up menu where you make a choice based on the direction you go, not the distance. No matter how many times you've used a file menu, you'll never be able to differentiate between 'save' and 'save as' or 'print' without looking. But with a radial menu, you can easily select options in the same way you can point to the eight points of the compass without looking.
This type of interaction is especially useful in touch and pen-based devices, which are becoming more prevalent. It's a simple pop-up menu, but the selection is made by direction, making it more intuitive and user-friendly.
Every time you make a selection, you're training your motor memory to learn the gesture. The key is to delay the pop-up menu. If the menu pops up immediately, people will read it and it will slow them down. However, if there's a slight delay, you have to push and hold, and then the menu comes up for you to make your selection.
Once you've learned the gestures, you can simply perform the stroke and it becomes a gestural interface. This accelerates the path for a novice to behave like an expert, much like training wheels. The menu is familiar, but it's tailored to make the selection based on your biological, neurological, and cognitive wiring, unlike any other linear menu.
This allows for a seamless, smooth transition to start using gestures, depending on what you frequently use. For example, when you stroke to the left and the right to switch forward and backward on images, you're using this kind of menu. But because you've memorized it due to good stimulus response compatibility, the menu never even appears.
This is a radial menu, where you've mapped west to go backwards and east to go forwards in the slides. It can also be hierarchical. For instance, in a shopping list you might stroke up to groceries, and then within groceries select meat, bread, staples, miscellaneous, or junk food. Once you know where the fruit and vegetables live, you can simply perform the compound stroke and the menu never appears.
This can be done with your finger, a stylus, or a mouse, and it works on every platform. It's a speed improvement of almost ten times compared to pop-up menus.
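Here is a minimal sketch of the mechanism, assuming a press-and-hold delay and an eight-way menu. The command names and thresholds are invented, and this is not any shipping implementation; the same direction-to-action mapping underlies the keyboard examples that follow.

```typescript
// Marking (radial) menu: selection is by direction, not distance.
// The visual menu only appears after a press-and-hold delay, so experts
// who already know the directions just stroke and never see it.
const ITEMS = ["Open", "Save", "Save As", "Print", "Close", "Undo", "Copy", "Paste"]; // hypothetical commands
const SHOW_MENU_DELAY_MS = 300; // pop the menu up only if the user hesitates
const MIN_STROKE_PX = 20;       // ignore tiny jitters

let origin: { x: number; y: number } | null = null;
let menuTimer: ReturnType<typeof setTimeout> | undefined;

// Map the stroke angle to one of eight compass directions
// (east = item 0, proceeding counter-clockwise).
function directionToItem(dx: number, dy: number): string {
  const angle = Math.atan2(-dy, dx);                // screen y grows downward
  const sector = Math.round(angle / (Math.PI / 4)); // -4..4
  return ITEMS[((sector % 8) + 8) % 8];             // wrap into 0..7
}

function onPointerDown(x: number, y: number): void {
  origin = { x, y };
  menuTimer = setTimeout(showRadialMenu, SHOW_MENU_DELAY_MS);
}

function onPointerUp(x: number, y: number): void {
  clearTimeout(menuTimer);
  if (!origin) return;
  const dx = x - origin.x;
  const dy = y - origin.y;
  origin = null;
  if (Math.hypot(dx, dy) >= MIN_STROKE_PX) {
    execute(directionToItem(dx, dy)); // expert path: just a stroke, no menu shown
  }
}

function showRadialMenu(): void { /* draw ITEMS around the origin for novices */ }
function execute(command: string): void { console.log("Selected:", command); }
```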
For example, on the Windows 8 touch keyboard, if you want to type an "o", you can simply tap it. But the accents are a radial menu: go up to the north and you get one accent, to the left a circumflex, and down to the left a tilde.
Most people don't know that this even exists, but it does. It's right there on the keyboard. Even if it didn't exist, you could invent it. You can augment the language and make it a larger part of the design.
There's also a hot virtual keyboard that uses the same thing. If you set it up right, if you do an upward stroke, you get the uppercase, so you don't have to use the shift key. If you stroke to the left anywhere, you get backspace. If you stroke to the right, you get space. If you stroke down to the left, you get return. This can save you time and improve your text entry speed by about 15%.
These are just some examples of how you can incorporate these features into your designs. It's all about paying attention to the details. For instance, in SketchBook you can draw with pretty nice ink. You come up to your palette on the lagoon, and the menu just comes up like that: red's to the right, black's up. You can switch between black and red without ever having to look.
Now let me show you a pen-centric application that has been designed to also work on touch devices like the iPhone and iPod Touch. Its ability to respond to pen pressure and control makes it different from other touch-based applications.
One of the key features of this application is its ability to mimic the physical properties of paint and paper. For instance, when you apply paint to the virtual paper, you can see the texture of the paper. You can also mix different colors of paint, and the application will blend them as if they were real paint. This is achieved by using a blending brush tool, which doesn't leave any paint but allows you to smear the existing paint on the canvas.
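For the curious, here is one naive way such a smear brush can work. This is a sketch using a browser canvas, not how this particular application implements it; the brush size and strength constants are arbitrary.

```typescript
// A simple "smear" brush: it deposits no new paint, it only drags the
// paint that is already on the canvas along the stroke.
const SIZE = 24;      // brush patch size in pixels (square, for simplicity)
const STRENGTH = 0.5; // how much of the previous patch is carried forward

function smearStep(
  ctx: CanvasRenderingContext2D,
  prev: { x: number; y: number },
  cur: { x: number; y: number }
): void {
  const half = SIZE / 2;
  // Pixels under the brush at the previous position...
  const carried = ctx.getImageData(prev.x - half, prev.y - half, SIZE, SIZE);
  // ...and the pixels currently at the new position.
  const under = ctx.getImageData(cur.x - half, cur.y - half, SIZE, SIZE);
  // Mix them: existing paint is pushed along, nothing new is added.
  for (let i = 0; i < under.data.length; i++) {
    under.data[i] = Math.round(
      carried.data[i] * STRENGTH + under.data[i] * (1 - STRENGTH)
    );
  }
  ctx.putImageData(under, cur.x - half, cur.y - half);
}
```

Calling smearStep from each pointer-move event, with the previous and current positions, drags whatever colour is already there along the stroke.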
The application also allows you to choose the materials of the paint and pastels. However, the level of control you have over these features depends on whether you're using your finger or a stylus. While the application works with your finger, it's hard to control the details. But with a stylus, you can control the pressure and create variations in lines.
It's important to note that capacitive pens, which are often used on phones and capacitive screens, are not styluses. They are simply proxies for your finger, and the application doesn't recognize them as a stylus. This means you can't assign a different function to them.
The application also allows you to use two hands to scale the canvas. If you bring your pen down, you can ink, and if you bring your finger down, it lays down paint. However, if the application can distinguish between a finger and a pen, you can assign different functions to each. For instance, you could use a single finger for the smear command and the pen to lay down paint.
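Here is a small sketch of how an application could assign different roles to pen and finger, using the standard web PointerEvent pointerType field. The tool functions and element name are hypothetical placeholders, not any real app's API.

```typescript
// Assigning different roles to pen and finger on hardware that can tell
// them apart: the pen inks, one finger smears, two fingers navigate.
// `pointerType` is "pen", "touch", or "mouse" on standard PointerEvents.
const canvas = document.getElementById("canvas") as HTMLCanvasElement; // hypothetical element
const activeTouches = new Set<number>();

canvas.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType === "pen") {
    startInkStroke(e.offsetX, e.offsetY, e.pressure); // pressure shapes the line
  } else if (e.pointerType === "touch") {
    activeTouches.add(e.pointerId);
    if (activeTouches.size >= 2) {
      startPinchZoom();                 // two fingers: scale/pan the canvas
    } else {
      startSmear(e.offsetX, e.offsetY); // one finger: smear existing paint
    }
  }
  // A capacitive "pen" on a plain capacitive screen reports "touch" here,
  // which is exactly the point: the system can't treat it as a stylus.
});

canvas.addEventListener("pointerup", (e: PointerEvent) => {
  activeTouches.delete(e.pointerId);
});

// Hypothetical tool hooks, stubs for illustration only.
function startInkStroke(x: number, y: number, pressure: number): void { /* ... */ }
function startSmear(x: number, y: number): void { /* ... */ }
function startPinchZoom(): void { /* ... */ }
```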
This application is a great example of how pen and touch can work together. It provides a wonderful example of graceful degradation, where you can drop tools to the side but still maintain functionality. It's not about choosing between pen or touch, but about how they can work together to create a seamless user experience.
In conclusion, the beauty of this application lies in its ability to provide a multi-sensory experience. It doesn't rely solely on haptics for feedback but creatively uses non-speech audio to give users a sense of tactile feel. It also allows for the subtle use of pen pressure and sets reasonable defaults when you're just using your finger.
The application also has great speech recognition, allowing you to change tools and colours without any interface on the screen. This leaves the entire canvas available for working. The key takeaway here is not to eliminate any sensory modalities of the human experience, but to consider them all and then explicitly eliminate those that are not necessary.
Consider the value of having everything available when it might be beneficial. You don't need to include all features in the first release. Instead, plan for two or three releases ahead, giving yourself a path to evolve and incrementally improve. This approach allows you to prioritize, as you need to generate revenue to fund research and development for the next generation.
Don't limit your thinking to just one product ahead. Build in the capacity to incorporate future developments. If you're thinking long-term, consider how the market has evolved. For instance, the iPhone was released in 2007, and handheld devices have since transformed our business. Now, we're on the verge of slates coming into their own. In five years, we can expect even smaller devices to become prevalent.
I've been working with Jeff Hahn from Perceptive Pixel on wall-mounted displays and how these devices interact. If you want your business to last, consider the ecosystem in which your products will exist.
I brought the 2002 version of Alias Sketchbook Pro to demonstrate that a product created by a team of five people in five months is still relevant today. Your products should be designed to remain relevant in the ecosystem you anticipate will exist in five to ten years. Plan for success and longevity, not obsolescence. Build in the capacity to evolve and bring your customers along with you.
Understanding the fine details, such as what difference the pen makes, or even what the differences are amongst pens, is crucial. When I buy a slate, I consider whether it has the N-Trig digitizer and stylus or the Wacom stylus. They're different, with subtle differences, much like buying a Yamaha saxophone. The artistry lies in exploiting these properties.
In projects, we use personas, but it's important to remember that you can't rely solely on user groups or focus groups for design. Steve Jobs once told me that he doesn't talk to users, he talks to markets. There's a larger audience there.
You need to understand both markets and individuals. It would be unwise to generalize based on analysing a single individual or a small group. Personas have some value, but it's also important to consider the context in which a product is used.
One of my favourite design books is Henry Dreyfuss's "Designing for People." He was the first person to write about personas, long before Alan Cooper. I highly recommend getting the first edition of this book for its beautiful typography and paper.
In addition to personas, consider the concept of the "placeona," a term I coined. It refers to the notion of situated computation, where the design is influenced by the social, physical, and cultural context in which it is used. This affects the design constraints. So, when planning a product, consider not only the persona but also the placeona.
The crux of my argument is that the significant shift in our world is not due to technology. Yes, technology has become smaller, faster, cheaper, and more abundant, but it's essentially the same as it was in the 80s. The real change lies in the human, cultural, and social aspects of our lives. The key questions are: who is doing what, where, when, why, for how much, and with whom?
This perspective shifts the focus from a technocentric approach to a more human-centric one. It's all about transitions and changes. During any transaction, the people involved change, the physical location changes, the technologies change, and the ecosystem changes. We need to adapt to these changes.
I am delighted to have had the opportunity to share these thoughts with you. I look forward to returning and continuing this discussion. For those interested in delving deeper into these topics, my website is a treasure trove of information. I also have a YouTube channel, WAS Buxton, where you can find a plethora of videos, including the one I showed today. Feel free to explore these resources and reach out to me if you need further assistance. Thank you for your time and attention.