Secrets of Science: It’s Life, Jim, But Not as We Know It

Written by Martin Lucas

Star Trek’s mission was to boldly go where no man (or woman) had gone before; in other words, they had to push the boundaries of what was known about their universe. It’s an interesting concept and obviously helped sell an awesome show.

What I’m curious about is whether good old planet Earth is built to follow the same kind of mission.

Is our society run in a way that pushes boundaries?

Do we give people permission to think beyond their role?

Where is our collective psyche heading for the future?

I find that any thinking along these lines is immediately met with defensive responses, and while I don’t dispute that we have innovators in the world, I do counter that technology is skewing our perception of the pace of innovation and, indeed, of its value. You can go all the way back to the ancient Greek philosophers and find similar debates about the world being in a constant state of flux; this is how it has always been, yet collectively we always think our present timeline is unique. It is certainly different. I’ve written before about the Strauss-Howe generational model, which claims the biggest waves of change happen once every four generations; we are at the beginning of such a wave of change.

So What?

Change is a scary proposition for us humans; we don’t like it as a general concept because we don’t know what it means for our lives: how will it affect me? Will I be safe? These are the kinds of feelings and emotions that change generates in us. My life, my brain, my passion are all focused on mass behaviour change; the weird thing about being a math savant is, well, it’s weird! I can process vast amounts of data about people, their feelings and their behaviours, which makes my predictions about the future more plausible. I mention this because a number of things are about to clash together that are fine when measured in isolation, but when they collide, some risky stuff emerges.

Just Because We Can

I am a big advocate of innovation (I’d have a pretty shitty business if I didn’t believe in pushing the boundaries of how we understand the science of people; that’s why we exist as a company), but where do we draw the line? And better yet, is there actually anyone who could draw said line when problems emerge?

Here are my main contenders that will create a cause and effect challenge:

A.I. (Machine Learning)

The Singularity

An I.Q.-Modelled World

I will break each one down, examining the risks unmonitored change can bring.

A.I. (Machine Learning)

I freaking love A.I. For context, we have several inventions that redefine the menu experience of online shopping. One looks at online and offline behaviours to give people more of what they want and less of what they don’t, personalised right down to treating each person as the unique individual they are. Another reverse engineers the way platforms treat their users (customers) like numbers, not people. So yes, it’s fair to say that I love A.I. At the end of the day, it’s teaching a machine to learn a way of dealing with a set of variables. In essence, it’s like teaching it free speech, so it understands a variety of ways to deal with the variety of questions and needs it gets from humans.
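To make “teaching a machine to deal with a set of variables” concrete, here is a deliberately tiny sketch in plain Python. It is not the inventions described above; the feature names (“spicy”, “sweet”) and numbers are made up purely for illustration. It learns a shopper’s taste profile from past likes and dislikes, then scores new items so you can show more of what they want and less of what they don’t.

```python
def learn_preferences(history):
    """Build a taste profile: add each liked item's features (+1),
    subtract each disliked item's features (-1)."""
    profile = {}
    for features, feedback in history:
        for name, value in features.items():
            profile[name] = profile.get(name, 0.0) + feedback * value
    return profile


def score(profile, features):
    """Higher score = more like what this shopper has enjoyed before."""
    return sum(profile.get(name, 0.0) * value for name, value in features.items())


# The shopper liked a spicy dish and disliked a sweet one...
history = [
    ({"spicy": 1.0, "sweet": 0.0}, +1),
    ({"spicy": 0.0, "sweet": 1.0}, -1),
]
profile = learn_preferences(history)

# ...so a new spicy item now outscores a new sweet one.
assert score(profile, {"spicy": 1.0}) > score(profile, {"sweet": 1.0})
```

A real system would use far more variables and far more data, but the idea is the same: the machine adjusts numbers from feedback rather than being told explicit rules.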

Simple enough, right? Now, the inventions I mentioned are much more complex, but they are still just magnified versions of the same idea. I will code none of it, but I could if I wanted to, and therein lies the challenge. As a species, we cannot control who has access to code, and we cannot police the unknown. To draw a recent example, no one foresaw that Google would control the world’s information, or that Facebook would have the world hooked on dopamine hits. Nor could we see that both would use their power to sell adverts and diminish the free ways people can share content (which is what both of their algorithms do). Are they doing anything wrong? No. It’s what I call the grey face of humanity; what they are doing is frowned upon, but every nation knows they wield more power than any government does. Money rules our world today. How can we know that teaching a machine to do something ‘grey-faced’ won’t blow up in our faces? We can’t always predict what will happen, which is the key issue. It only takes one human to make one mistake and we could be screwed. That one human may not be the smartest person in the world, or they may be a creative genius; either way, they are still human and we all make mistakes. No one polices code.

The Singularity

There is a great book on this subject, Ray Kurzweil’s ‘The Singularity Is Near’, which looks at advances in chemistry, code and biotechnology. It basically says we can take code, create some chips and bots, and fix anything. This is already emerging in medical fields with partial limb replacements and 3D-printed synthetic parts. My challenge here is: do we end up with Gattaca? Do you remember the movie?


If we do, I can’t see this manifesting as anything but services for rich people, resulting in a split between classes of humans: the microchipped, Limitless-style (not really a thing), genetically modified few, and the masses remaining just as we are today. That’s pretty realistic in terms of what could happen, and also very scary; you can draw a line to medical insurance, donors, human rights injustices and employment law abuse, in a world awash with a lack of justice unless you have money in the bank, or are a corporation, or both. That’s how our world has changed in the past 100 years; laws and bylaws all created by people serving those powerful groups. It’s just how things are, so where do we draw the line? No one polices which services should be charged for, let alone whether they should even be created.


Do you fear death? It’s the one unbeatable certainty in the world, and many people fear it; many have tried to find a way to beat it. Do you want to go all Futurama?


It’s an uncertainty that hangs over us all: our safety, our health, my amazing weekly skill of almost getting hit by a vehicle. It’s ever-present. We all try not to think about it; what’s the point of wasting time wondering about it, right? Agreed. Now, imagine you are a billionaire and you have everything you need. You’re sitting at your desk in your home office; you can see your yacht out of one window, your fleet of cars out of another, and on the walls of your office you see all those celebrity, business and political friends of yours. Life is good, but you fear death. You wonder if you can cheat it, so you decide to devote millions to that purpose. Shit just got real! Right, you’d use A.I., the singularity, chemists, biologists. You’d explore it all, and damn… No one can police what a billionaire gets up to.

An I.Q.-Modelled World

104 years ago, a dude wrote a book and people liked it (a lot), so we built our education, business, lives and world around the concepts of language and math. We are now a century further forward, and we are beginning to understand how the brain actually works and how we all learn in different ways (autism being one example), not just through math and language. We are beginning to accept that how we think, how we talk, how we read situations and how our imagination works are all part of EQ (Emotional Intelligence). EQ is a more dominant intelligence than IQ, but to embrace this would mean way too much administrative change. This is not the type of boundary-pushing change we readily accept, is it? Yet within 20 years (per Dan Pink’s A Whole New Mind), A.I. is going to replace at least 75% of jobs (as we know them today). For one, there will be fewer lawyers because of concepts like Ross Intelligence. Anything that is mostly book learning, or jobs that are mainly ‘read, rinse and repeat’, will die off. What’s left is the human-to-human stuff like service, sales, marketing and philosophy - stuff where you have to think on your feet and deal with real live humans. For everything else, we can teach a machine to learn and do it… but we may not need human-to-human skills for much longer either, and no one can police that change.

As individual elements, they are all risks of their own; as a collective, we have a BIG problem. The final part is sitting right in front of our faces - a behaviour we all do, we all encourage. We all hunt it; we all expect it.

Metric-Based Society

What we could not have predicted about an IQ-modelled world was how it would intertwine with capitalism and be abused under the guise of productivity. Everyone has a target; everyone has numbers to hit; everyone has some combination of shareholder value, feeding their family, needing a new car or simply keeping their job that justifies their behaviour. Remember the grey face of humanity reference? Well, if everyone is doing it, then what do I have to lose? If I don’t do it, someone else will. I don’t want to look foolish!

The cause and effect is that, in a very ironic way, our society behaves with no boundaries to its ethics, only within the boxes of our roles. Everything is about productivity and defending your job/lair, which creates a society scared to challenge its bosses, scared to challenge something that doesn’t feel right. If people do things because they have to hit their numbers, and their survival instinct is threatened even at work, then we have a society focused only on hitting targets, all to fulfil money-led behaviours.

Many of you will say, ‘well, I don’t do bad things’, or ‘my company is good’. That is a very fair reaction, but what happens when you are under pressure, or aren’t doing as well as normal? When temptation comes knocking and you know it is unlikely anyone will find out, what do you do then? The truth is that humanity is grey-faced as a collective. I don’t see the world as ‘this is good’ or ‘this is bad’; it’s just somewhere in the middle. It’s funny, but seeing an act of kindness jolts you; why is that? Because it’s not normal to see good, just as it is not normal to see bad. We all live in the middle.

When pushed, we ourselves push boundaries, but not like the Star Trek mission; it’s more the questionable ethics and behaviours we can’t police. Do you recall Champix, the anti-smoking medication? It blocks nicotine’s addiction receptors in the brain; who’d have thought that was even possible? The makers jacked up the price once it started working, and that’s why it is not prescribed by doctors any more. Not because of the depression links, not because it doesn’t work; it’s just a money thing. That is one example of what happens in a capitalist, metric-based society, but what does it mean for tomorrow?

The Decision Game

It is 2027: A.I., the singularity, death, and an I.Q.-modelled world run as a metric-based society have all joined together.

Go back inside your mind and try thinking as that billionaire again. You are very aware of all the advances of the past ten years; you have a company earning a fortune, but your biggest overhead is still people, and they are just so pesky and needy and expensive! You have an idea!

You think, ‘I wonder if we could put a chip in people’s heads to block independent thoughts? Heck, we could even clone them and use this chip so I’d own them; they’d only do what I need them to do, just like A.I. but more human-to-human skills, which is why I need them. I’d have my own Matrix! Hmmm…’

Would you make the practical choice? Wouldn’t humans be cheaper and easier if you could control them...

...Humans who do but don’t question things.

...Humans who don’t need.

...Humans who don’t have civil rights.

...Humans I don’t have to pretend I care about.

...Humans who make me money in my sleep and I pay in little cubes of protein.

It’s your decision, what would you do?

Back where we started: Star Trek’s mission was to boldly go where no man (or woman) had gone before; in other words, they had to push the boundaries of what was known about their universe. It’s an interesting concept….