It's easy for people to understand robots as a personification of artificial intelligence and the ultimate interface between humans and computers. After all, we can see them, we can touch them, and they look a lot like what we already know and understand. Measured against that image of an advanced future, we can resolutely say that we are not there yet. We don't have robots living alongside humans, despite what our past selves imagined and predicted. But looks can be deceiving. The absence of physical robots doesn't mean they aren't there: they are deeply embedded in the environments where we spend the most time, and sometimes wedged into our human relationships.
Our world has been cloned, re-imagined, and modeled into an online environment. Software developers have operationalized interactions into likes, emotions into emojis, and fleeting thoughts into tweets and the like. While imperfect, this has created a virtual universe in which we all have an identity, and in which we live, learn, work, and engage for a substantial part of each day.
In that world, robots are actually quite ubiquitous. These robots don't have hands and feet and faces, but they are very real and very present. What we're talking about are adaptive algorithms that incorporate elements of learning and AI: more specifically, the algorithms that determine what we see, hear, and notice in our social media, news, work, and other semi-random information feeds. When we acknowledge this new world and conceptualize the online space in this way, we can be mindful of how it affects the way we learn about and perceive the world, and which topics we end up prioritizing. One step further still, if we begin to think about the goals of each online space, we can begin to understand the likely preferences of its adaptive algorithm, and we might even try to tame it to help us find what we're looking for.
Most of these online spaces have a basic mandate that starts with getting noticed and getting users. Next, they often encourage relationships and connections between users. Finally, they encourage users to spend as much time as possible engaging through the platform. All of this virtualizes our existence and gives the data-hungry algorithms the nutrients they need to thrive and do what they do best. Various creative business models go one step further and monetize the insights that the software delivers. We can't know for certain what the true goals of each online space are, and even if we figure them out, they can change. But we can challenge ourselves to make an educated guess based on the apparent business model and a realization of the somewhat universal economic pushes and pulls that all businesses face. Equipped with this wisdom, we can better weigh what we're being shown, and maybe even leverage the idiosyncrasies of the postulated algorithm to deliver more of what we want to see and learn. Of course, we can literally search for what we're looking for using a search box, but sometimes tacit interaction can be just as good, if not better.
Conceptualizing these algorithms in this fashion helps us come closer to making the intangible tangible, and we start to see the "robots" even though they are really just embedded in code and exist only in a relational manner. Now we might want to try to train them to serve up things we want to see. Instead of glossing over irrelevant ads or posts, we can actively dislike them (if the feature exists) or mark them as irrelevant or not interesting. If we want to get the chatter or pulse in a specific subject area, we can follow thought leaders in that area and create sub-feeds or lists, if the platform supports them. Some of these algorithms may also be more sophisticated than simply tallying likes. They might care how much time we spend scrolling through a feed of images, or how long we keep our screen on a given post. In that case, if we like a subject, we might linger on a relevant part of our newsfeed and see if that changes anything. In this way, we're tacitly training the algorithm to serve up more of what we want to learn and see.
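To make that concrete, here is a minimal sketch in Python of how a feed-ranking rule might weigh those signals. Everything in it is an assumption for illustration: the per-topic profile, the weights, and the scoring formula are made up, and real platforms use far more elaborate models, but the feedback principle is the same.

```python
from dataclasses import dataclass

@dataclass
class TopicProfile:
    """Hypothetical per-user interest signals for one topic."""
    likes: int = 0
    dislikes: int = 0           # explicit "not interested" / dislike marks
    dwell_seconds: float = 0.0  # time spent lingering on posts about the topic

    def score(self) -> float:
        # Assumed weights: explicit signals count more than passive dwell time.
        return 2.0 * self.likes - 3.0 * self.dislikes + 0.05 * self.dwell_seconds

def rank_feed(posts, profiles):
    """Order candidate posts by the user's inferred interest in their topic."""
    return sorted(
        posts,
        key=lambda post: profiles.get(post["topic"], TopicProfile()).score(),
        reverse=True,
    )

# Lingering on cycling posts (and disliking gossip) pushes cycling up the feed.
profiles = {
    "cycling": TopicProfile(likes=1, dwell_seconds=120),
    "celebrity gossip": TopicProfile(dislikes=2),
}
posts = [{"id": 1, "topic": "celebrity gossip"}, {"id": 2, "topic": "cycling"}]
print([post["id"] for post in rank_feed(posts, profiles)])  # -> [2, 1]
```

The only point of these toy weights is that explicit signals (likes, "not interested" marks) move the score more sharply than passive ones (dwell time), which is why deliberately disliking irrelevant posts is usually the faster lever.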
Since it is hard to know exactly what the algorithm is doing and how it is gathering information and learning, it can be fun to play with it and use trial and error to see how its behavior changes. Consider testing it out: pick an inane topic that you don't usually care about and that randomly appeared in your broader or global feeds, and start to spend time engaging with that content, both directly and tacitly. Come back later, see if you can spot the same topic again, and spend a bit more time on it. With each iteration, actively look for that topic and try to engage with it (short of sharing it). If your feed starts to morph and you see a surprising number of posts on that topic, congratulations: you've met your first robot in action. Maybe we are in the future we imagined, and they actually are everywhere.
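If we want to watch that trial-and-error loop in miniature, the toy simulation below runs the experiment end to end. The topics, starting weights, and update rule are all invented for illustration; the only claim is the shape of the dynamic, where repeated engagement with one topic compounds until it crowds out the rest of the feed.

```python
import random

# Toy feedback loop: each round, "engaging" with a topic boosts its weight,
# and the next feed is sampled in proportion to those weights.
weights = {"news": 1.0, "sports": 1.0, "birdwatching": 1.0}  # hypothetical topics
target = "birdwatching"  # the inane topic we deliberately engage with

for round_num in range(1, 6):
    feed = random.choices(list(weights), weights=weights.values(), k=20)
    shown = feed.count(target)
    weights[target] += 0.5 * shown  # assumed update: engagement reinforces weight
    print(f"round {round_num}: {shown / len(feed):.0%} of the feed was {target}")
```

Run it a few times and the target topic's share typically climbs from roughly a third of the feed to most of it within a handful of rounds, which is the morphing feed the experiment above asks you to watch for.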