“Although the chips are biologically safe, the data they generate can show how often employees come to work or what they buy. Unlike company swipe cards or smartphones, which can generate the same data, people cannot easily separate themselves from the chips.”
This is a questionable case. I wonder whether it is even legal, because most countries have laws about ‘integrity of the body’. And here even the integrity of the whole person is at stake.
In some cases I can imagine that a chip in the brain could work well, for example if someone has schizophrenia or severe depression. But even then, it should be applied by a doctor and with the consent of a medical commission.
Still, before anyone decides to use this, we should all think harder about the possible horrible outcomes of such chips in bodies. Is it possible to make a person more aggressive with a chip, in order to turn him or her into a warrior? For now I am absolutely against it, not least because it can be used to suppress other people.
It could even suppress the person with the chip without their knowing, through a kind of “chilling effect.” I find the people quoted in the article saying “I like to try new things” and “I want to be part of the future” infuriating. What a privileged point of view. What about the employees who don’t want to be chipped and will feel coerced to do so because others have it?
@laura thanks for posting this. Here’s my take on this and a question.
I’ve been involved in the biohacking community for more than five years now, and while I don’t know any of the people referenced in the article, I do know the Swedish biohacker community, with people such as Hannes Sjoblad who is driving this. I’ve implanted both a chip (made by Amal Graafstra from DangerousThings.com) and a magnet.
I would say many of the people who’ve implanted this chip have been curious, rather than extremely thoughtful about the possible first- or second-order effects. My reason for getting the chip was quite similar to my reason for being very active in the early days of the Quantified Self community: experiencing technologies at a very early stage to better understand their implications.
There’s an interest in experiencing these technologies that makes you more aware of both adverse and positive effects. And yes, I am aware that is an extremely privileged position. But in my opinion that doesn’t necessarily make it wrong to participate actively in some of them. That said, I don’t much sympathize with the people in this article, who are more focused on the hype than on, e.g., discussing implications or stimulating debate. I do believe that trying out these technologies (not just to ‘try out new things’ but to undergo and analyze them in order to form an experienced opinion) is something else.
I’m wondering what your take on this would be: is it the specific attitude of those interviewed that infuriates you, or the general idea of experiencing/participating in technologies that can be potentially harmful?
(Note: there’s a large difference between the very diverse and thoughtful Quantified Self community and the hype-focused ‘quantified self’ trend.)
In short: that specific attitude. I was an early adopter of a lot of health and fitness tracking (I think I tried all the first-generation mainstream fitness trackers), so my position doesn’t come from being a Luddite. But this has made me think about where the edges are.
When it comes to biohacking, I might be tempted myself if I had a bit more programming and security knowledge in that area. You should be able to do whatever you want with your own body. But data from implants should not be shared with larger organisations, be it the corporation that collects data from the device or the employer who encourages you to get an implant so they can track you.
It’s a nuanced question: should we not try any technology that could be potentially harmful? If a technology is only going to affect us as individuals, then maybe that’s our decision to make. But as early adopters and advocates for new technology, we need to look beyond the exciting and shiny to the real-world societal implications. Anyone with technical knowledge should be questioning the use of any new technology, because those profiting from it usually aren’t.
This kind of technology certainly shouldn’t be piloted in the workplace. By all means give people free implants and the ability to track their own working habits, with ownership and control of their own data. But sharing that data with the employer is clearly an irresponsible use of technology. Am I alone in thinking this?
Thanks @laura for diving a bit deeper into this - I appreciate the discussion. There is a clear need for more people pointing out the implications of technology (“it’s not neutral”) and what we can all do about them in our everyday lives. This often means walking a fine line, and sometimes taking clear positions on one side of the aisle to try to “balance out the other side”. Sharing your data with your employer is something I would certainly argue against (“no, unless”), although I do know of cases where employers have undertaken efforts together with employees to sincerely understand the benefits and consequences, with the right amount of transparency.
Early adopters should understand the weight their role brings - which is often not the case (I too could have done better in many instances). That said, my experience with early-adopter groups has been largely positive. I was part of the second wave of internet enthusiasts coming online in the mid-90s, and I enjoyed the great technical curiosity of many people I met on IRC, bulletin boards and newsgroups. In later years (mid-00s) I became involved in lots of activities we’d now call “digital health”. The Quantified Self community is one example that has worked to strike a balance between technical and social opportunities and consequences. Its “big tent” philosophy has always allowed a very diverse group to congregate - including social scientists, philosophers, security researchers, patients and early adopters. Such movements could also benefit other sectors and - in my opinion - deserve more attention (although that is often difficult, as they thrive much more outside the spotlight, in more “academic-type” environments).
The need for clearer positions from the parties involved (users, but even more builders) about what they would or would not do is becoming increasingly important. That is why I wholeheartedly support ideas such as the Ethical Design Manifesto. Similar practices are already happening in other areas, such as medical conferences with “Patients Included” manifestos about the need for diverse participation.
On another note - there was a part of your first reaction that struck me: “What a privileged point of view”. This morning I read Brian Chen’s piece in the NY Times on VPNs (https://www.nytimes.com/2017/04/05/technology/personaltech/vpn-internet-security.html?_r=0) and it struck me that the current advice to “use a VPN” in many privacy-aware circles could also be considered extremely privileged: you can have privacy if you pay for it. Of course many of those circles are at the same time rallying for better regulation, but it made me think about how difficult it is to balance a possibly privileged point of view with a real intention of doing good. Work in progress.
Yet I do not see an answer to Laura’s question. Of course you can experiment on yourself if you like. But I have even more doubts about the intentions of this project if they call people who do not accept it ‘Luddites’. This is very different from the Industrial Revolution: this is about potentially violating human rights. A body belongs to the person alone and not to others. This is about the wish to monitor and control others, to have power over them… The IT revolution is far more intrusive towards the human personality than the Industrial Revolution was towards labour, although in those days too, people had to fight for workers’ rights to regulate the transition.
Now we are getting a division between IT people and non-technical people, and the technicians are gaining more and more power over the others. Besides this, money does it all: richer people can afford privacy, while poorer (and often less educated) people accept everything and become victims of ‘surveillance capitalism’, vulnerable to hacking, to prejudice, etc.
Ergo, I would like to see this discussed by many more people, including politicians.
Which specific question are you referring to, Klaske?
I’ll respond in more detail in a while, but I just thought it was worth quickly saying that I’m the one who used the term “Luddite” - and it was meant to be tongue in cheek…
I referred to this: "What about the employees who don’t want to be chipped and will feel coerced to do so because others have it?"
A situation at the entrance of a dance festival, where you can show your chipped arm to get in, is different from a workplace. The article is about employees, and so was Laura’s question. In a relationship between employee and management there is no real free choice. Even if the boss says the chips are voluntary, there will be pressure on every employee to join in. Companies that want more control over their employees will prefer an employee with a chip to one without.
I am thinking of Articles 3 and 23 of the Universal Declaration of Human Rights too.
Again, if you want to experiment, go ahead; but in a workplace, no. This needs to be discussed by many more people than technicians alone.
Times are changing, but human beings have not changed much in the last 4,000 years or so. Our feelings are still the same; our reactions, our perceptions of favourable and unfavourable behaviour (good and evil), have changed only a little. Human beings could possibly be changed by implanted chips, by the singularity, by attempts to turn them into cyborgs, as Elon Musk and many more Silicon Valley people want. But this can only be done by a powerful someone or a powerful organisation, only by the suppression of ordinary people. The movement that US companies like Google are trying to implement will lead to the suppression of many people. Over us, for us, but not by us.
I was an early adopter, the first in my family, and now I am skeptical of everyone who talks about ‘global’, ‘the whole world’, ‘the benefit of all’, etc. There is too much aggressiveness in this kind of talk, too much propaganda. And money, of course, because we all know that Google is the one who has benefited most from its implementations.
“The smallest minority on earth is the individual. Those who deny individual rights, cannot claim to be defenders of minorities.”
~ Ayn Rand. “The Virtue of Selfishness: A New Concept of Egoism.”
As long as we continue to view everything as a thing, we will be no more than that. We place technology in an adversarial relationship to ourselves, thus inviting our own destruction. We want to be cool, we want to be in, we want to experience this destruction first hand, leaving ourselves behind.