Project 5

Our Project 5 may have been posted by another group member (Thom, Katie, or Sarah), but I wanted to post it to my blog so that I have a copy of it kept here. We answered Option 2, and the link can be found here.

Post 14: Censorship

It was really interesting to me to see how China goes about censoring the internet and enforcing the restrictions. I thought the article “Cracking the ‘Great Firewall’ of China’s Web censorship” gave a lot of information to help better understand why governments limit speech, how they go about it, and what the implications are. The article said that China has what is technically the world’s most sophisticated internet filtering system, according to the OpenNet Initiative, and that it is much easier for Chinese authorities to monitor all of the traffic into and out of the Chinese web. This is because all of the data enters China through three cities, whereas in the U.S. data comes in and out at many different locations. In this way, government officials are able to monitor and police what enters and exits the country. The government also demands self-censorship and enforces heavy censorship controls on local companies.

Demanding self-censorship. Chinese authorities hold commercial websites responsible for what appears on them. In Beijing — where Internet controls are strictest — authorities issue orders to website managers through cellphone text messages and demand that they comply within 30 minutes, according to a report last fall by Reporters Without Borders.

It is interesting to consider the Yahoo! case and how it affects individuals who are not even in the country. The article “Yahoo! in China – Background” gave serious consideration to what Yahoo! has been doing and what it should be doing.

According to Yahoo!’s own later public admissions, Yahoo! China provided account-holder information, in compliance with a government request, that led to Shi Tao’s sentencing.

Governments around the world are asking companies, including Yahoo!, to comply with their efforts to repress people’s rights to freedom of expression and privacy. Companies must respect human rights, wherever they operate, and Yahoo! must give adequate consideration to the human rights implications of its operations and investments.

The thing is, Yahoo! does not have the right to repress anyone’s freedom, just as unjust governments do not. I think it is good to note that Yahoo! has started to take steps to amend this process and change its involvement. To some degree, at this present time, companies like Yahoo! have the ability to influence the political sphere in helping ensure human rights, and I think it is their duty to do so.

Finally, Yahoo! should not consider it an option to arrange a business relationship with a Chinese Internet company and then cite its own lack of control over its operations as an excuse for not taking pro-active steps to stop involvement in abuse of freedom of expression or privacy rights. But, on more than one occasion, Yahoo! has cited its relationship with Alibaba (Alibaba controls Yahoo! China in exchange for Yahoo!’s 40% ownership share of Alibaba) to explain its lack of ability to resist government requests for user information.

That is why we are supporting the Global Online Freedom Act, which is designed to respond to and prevent censorship and abuse of freedom of expression on the Internet by placing restrictions on U.S. Internet content hosting companies operating in countries that censor, prosecute and/or persecute individuals based on the exercise of such freedoms.

I definitely think that censorship is a major concern. Even though we may not experience intensive censorship ourselves, that does not mean it is not there or not harmful. As Martin Luther King Jr. once said, “Injustice anywhere is a threat to justice everywhere.” I think it is important to think more along those lines. Censorship is dangerous and something we do not want to spread; given that, it is a concern that it exists at all.

Post 13: Intelligence

In the article, “What is artificial intelligence?” I read that:

Simply put, artificial intelligence is a sub-field of computer science. Its goal is to enable the development of computers that are able to do things normally done by people — in particular, things associated with people acting intelligently.

But there is still the idea that it is much more complex than this. There is a range from strong AI to weak AI, and from narrow AI to general AI. When it comes down to it, an AI doesn’t have to function the same way humans do. It could have the range but not the depth, or have the depth in a particular area but not the range. The thing is, it just has to be smart, and that could look different in many different machines.

I think there is too much hype around what artificial intelligence actually is. People seem to believe that it has to be just like a human to count as artificial intelligence. Instead, I think it should be evaluated more as any type of intelligence that can be programmed. So these “tricks” or “gimmicks” really are examples of artificial intelligence. The article “How Google’s AlphaGo Beat a Go World Champion” commented on how AlphaGo draws on humanity’s accumulated knowledge of the game and builds on it with every game it plays:

If AlphaGo had lost to Lee in March, it would only have been a matter of time before it improved enough to surpass him. Go is constantly evolving. What’s considered optimal play changes quickly. Humans have been honing our collective knowledge of the game for more than 2,500 years—the difference is that AlphaGo can do the same thing much, much faster.

The article “Is AlphaGo Really Such a Big Deal?” also talks about promising steps for AI in the future:

We have learned to use computer systems to reproduce at least some forms of human intuition. Now we’ve got so many wonderful challenges ahead: to expand the range of intuition types we can represent, to make the systems stable, to understand why and how they work, and to learn better ways to combine them with the existing strengths of computer systems. Might we soon learn to capture some of the intuitive judgment that goes into writing mathematical proofs, or into writing stories or good explanations? It’s a tremendously promising time for artificial intelligence.

As to whether the Turing Test is a valid measure of intelligence, I think the answer depends on what we consider intelligence to be. There are many different ways to evaluate it, each with its own pros and cons. The article “Computing Machinery and Intelligence” explains the reasoning behind the test:

I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think.” The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

While the article “The Turing Test Is Not What You Think It Is” goes further to understand what the Turing test really is trying to investigate:

Good points, all, but they miss the point. It was never Turing’s aim to devise an empirically robust way of telling whether someone or something is really thinking. Can a machine think? For Turing that question was, as he wrote, “too meaningless to deserve discussion.” What is “thinking” anyway? We can hardly hope to make that notion precise.

There is a huge issue with the idea that a computing system could be considered a mind. If so, does turning off a machine constitute killing it? What ethical complications would arise if AIs got to that point? I think it is interesting to consider the idea brought up in “2015: What do you think about machines that think?” that there could be two separate societies. It would be a lot easier from an ethical standpoint if we did not consider humans to be biological computers or computing systems to be minds, but if that is the direction things take, it is important to lay out ethical guidelines for dealing with these issues.

 But wait! Should we also ask what machines that think, or, “AIs”, might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is “their” society “our” society? Will we, and the AIs, include each other within our respective circles of empathy?