Friday, January 11, 2019

B1 - Facebook, Fear, and the "Black Box" of AI


In the age of smartphones, automated vehicles, and even microwaves that connect to the internet, it is almost natural to get caught up in buzzwords like AI, the Internet of Things, and machine learning. While these terms likely mean little to many Americans, it is becoming rare to meet someone who does not own a smartphone, and the very technologies behind those buzzwords are what allow smartphones to perform to their full potential, powering mapping services like Google Maps and voice assistants like Siri, Cortana, and Google Assistant.

With such advanced capabilities in a constantly expanding and ever-accelerating field, it is fair to question, or even be concerned by, what is actually taking place. Of course, major breaches like Facebook's leak to political consulting firm Cambridge Analytica, in which the data of approximately 87 million users was improperly shared with a company employed by a 2016 United States presidential campaign, only add fuel to the fire of public paranoia and fear regarding privacy, data, and shared information. In episode #109 (entitled "Is Facebook Spying on You?" [1]) of Reply All, Gimlet Media's podcast about the internet, hosts Alex Goldman and PJ Vogt discuss the daunting reality of how much information websites like Facebook actually have on their users. The basic thesis of the episode is that while it may seem to many users that the platform listens to them through their smartphone microphones, allowing it to place timely, well-targeted ads based on their recent interests, discussions, or purchases, Facebook simply doesn't need to: it already gathers more than enough information through other means. One basic example is a feature called Facebook Pixel, essentially a way for Facebook to track what its users are doing all over the internet, even when they are not on Facebook. In the words of host Alex Goldman:
[Facebook Pixel is] installed on millions of websites. So when you go to one of these sites with Facebook Pixel on it, it watches what you do and reports that information back to Facebook. It can see how long you linger on a certain webpage, it can see if you purchase something, it can see if you put something in your cart on a website and decide not to buy it. It’s kind of like an internet surveillance camera. [1]
So, just with Pixel, Facebook can target ads based on the data it collects about which websites users have visited recently, what purchases they plan on making, what travel they have in mind, and so on. And that's just the tip of the iceberg. Like many companies that employ artificial intelligence and machine learning, Facebook likely has such intensely layered, complex algorithms at play on the back end of its software that the vast majority of its employees may not know exactly how it all works. So while companies like Facebook are notoriously private about how their algorithms and machine learning actually operate, it's very possible that even they don't understand, in an entirely comprehensive way, how things work.
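To make the idea concrete, here is a rough Python sketch of the kind of event a Pixel-style tracker might report home. The endpoint and field names are invented for illustration; this is not Facebook's actual code or API, just the general shape of the technique Goldman describes.

# A purely hypothetical sketch of the kind of event a Pixel-style tracker
# might send back. The endpoint and field names are invented for
# illustration; this is not Facebook's actual code or API.
import json
import urllib.request

def report_event(visitor_id: str, page_url: str, event: str,
                 dwell_seconds: float) -> None:
    """POST one browsing event to a (hypothetical) tracking server."""
    payload = {
        "visitor_id": visitor_id,        # identifies the browser across sites
        "page_url": page_url,            # which page was viewed
        "event": event,                  # e.g. "page_view" or "add_to_cart"
        "dwell_seconds": dwell_seconds,  # how long the visitor lingered
    }
    req = urllib.request.Request(
        "https://tracker.example.com/collect",  # invented endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# A single shopping session could emit a stream of such events, e.g.:
# report_event("abc123", "https://shop.example.com/shoes", "add_to_cart", 42.0)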

Along with privacy issues, the idea of artificial intelligence as a "black box," that is, a system where data goes in one end and some desired output comes out the other without anyone being entirely sure how it happened, has certainly seeped into public sentiment and the general fear surrounding AI. Already we see this fear of the unknown having tangible effects: a report by the AI Now Institute "recommended that public agencies responsible for criminal justice, health care, welfare, and education shouldn't use such technology." [2] While it takes many forms, I believe the general "black box" fear boils down to this: computers, and AI in particular, are so advanced and so integrated into our everyday lives, and our understanding of how they actually produce their outputs is so limited, that the consequences of that lack of understanding could one day be devastating, especially as AI continues to progress and software continues to accelerate its rate of learning.
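A toy Python example shows what "data in one end, output out the other" looks like in practice. The weights below are arbitrary stand-ins, not a trained model; the point is that every parameter is in plain sight, yet none of them explains the output in human terms.

# A toy illustration of the "black box": data goes in one end and a
# score comes out the other. The weights are arbitrary stand-ins, not
# a trained model.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # first-layer parameters
W2, b2 = rng.normal(size=8), 0.0               # second-layer parameters

def predict(x: np.ndarray) -> float:
    """Turn four input features into a single opaque score."""
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU hidden layer
    return float(hidden @ W2 + b2)

x = np.array([0.2, 1.5, -0.3, 0.7])  # some input data
print(predict(x))  # an output appears...
print(W1)          # ...and we can inspect every weight, but the numbers
                   # themselves offer no human-readable explanation.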

So is this fear valid, namely, that AI's seemingly magical and incredibly convoluted methods of producing wide-ranging results could be dangerous or even devastating to society as a whole? Should we run from the "black box" of artificial intelligence and the software it produces? In my opinion, and in the opinion of Vijay Pande, writing in The New York Times, no. While in the world of science fiction computer learning naturally leads to the enslavement of humanity, our lack of insight into AI and machine-learning processes should really be seen as something more human than computer-like. Just take a moment to consider how many decisions you make every day that you don't fully understand. As Pande explains:
But we make decisions in areas that we don’t fully understand every day — often very successfully — from the predicted economic impacts of policies to weather forecasts to the ways in which we approach much of science in the first place. We either oversimplify things or accept that they’re too complex for us to break down linearly, let alone explain fully. It’s just like the black box of A.I.: Human intelligence can reason and make arguments for a given conclusion, but it can’t explain the complex, underlying basis for how we arrived at a particular conclusion. [3]
Thinking about human intelligence as a "black box," one could reason that AI is actually more transparent than much of human knowledge. In contrast to the human mind, decisions and biases within artificial intelligence can be directly analyzed and scrutinized, and as bugs and gaps are found and refined, more insight into the reasoning of AI can be uncovered, as the small sketch below illustrates. So while the unknown may seem daunting, I personally don't see the growth of artificial intelligence as anything to fear.
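As one small illustration of that kind of scrutiny (assuming scikit-learn is available; the data and feature names are made up), a simple model's decision basis can be read off directly, something no one can do with a human mind.

# Made-up data: three browsing features and whether the visitor bought.
from sklearn.linear_model import LogisticRegression
import numpy as np

features = ["pages_visited", "cart_adds", "dwell_minutes"]
X = np.array([[3, 0, 1.0],
              [10, 2, 8.5],
              [1, 0, 0.2],
              [7, 1, 5.0]])
y = np.array([0, 1, 0, 1])  # 1 = the visitor made a purchase

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    # Each input's influence on the decision can be read off directly,
    # audited, and challenged; a human gut feeling offers no such ledger.
    print(f"{name}: {coef:+.3f}")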


Comments on other posts:

Christian, I found your post very intriguing. The prospect of being able to 3D print structures would be a breakthrough not only from a structural/materials point of view, but also from a "constructability" point of view. Construction costs and times would be slashed, which would drive productivity immensely. Obviously, as the tech progresses, interesting new issues would have to be addressed, such as what will happen to the construction management industry, along with a host of additions and potential amendments to standardized codes. While highly impractical, I think it would be interesting to create a small city of entirely 3D printed residential homes as a sort of "experiment" to see how the life cycle of the structures would hold up. All in all, a very interesting topic.

Gabe, I found your point on the last article really interesting, as it touches on an important subject regarding the future and how our society will adapt to technology. One example of how we might not properly adapt is the "Automation Paradox," which is basically the idea that as tasks like driving and calculating become more and more automated, we lose the need to perform them on our own and eventually ignore them entirely. This becomes a serious problem when technology fails or malfunctions, especially in situations where quick decision-making is the difference between life and death. Take automated cars, for example: if we forget how to drive altogether, imagine a car malfunctioning mid-trip, where the difference between life and death is simply knowing the basics of driving. That's not to say we should avoid or fear automation and technology, but rather that we should use it as a means of progression, not as a crutch.

Nick, this is really fascinating stuff. The prospect of 3D printing structures on other planets is mind-boggling and incredibly ambitious, yet I feel it is also a really practical implementation of the technology. Similar to what you said, while new technologies always disrupt the current labor force, they also create new jobs and careers that we can't even imagine because they don't exist yet.

Sources:
[1] Goldman, Alex, host. "Is Facebook Spying on You?" Reply All, episode 109, Gimlet Media, 2 Nov. 2017, https://www.gimletmedia.com/reply-all/109-facebook-spying.


[2] Knight, Will. "There's a Big Problem with AI: Even Its Creators Can't Explain How It Works." MIT Technology Review, accessed 11 Jan. 2019.


[3] Pande, Vijay. "Artificial Intelligence's 'Black Box' Is Nothing to Fear." The New York Times, accessed 11 Jan. 2019.

2 comments:

  1. Albert, I agree with your conclusion that AI is nothing to fear. I feel that most people develop this fear for a few reasons: they don't understand it, they're worried they'll lose their jobs to it, or they're concerned that one day we'll lose control over it. There are limitations we as a species can't overcome naturally, and technology has always been our way forward; AI is no different. If we continue to work against the tide of progress, we're only hurting ourselves in the long run.

  2. Albert, I really appreciate this post because of how relevant it is to the current times, with information being valued even higher than gold. I know for certain there are times when I do a simple Google search and then start seeing ads pop up with related information. I agree that AI's development isn't really something to fear, but for some people, the new age of technology that AI is sure to bring requires a leap of faith, and the feeling of complete exposure to an unknown source doesn't really help.

