Aug 21, 2014

(Source: larvalhex, via capsep)

Aug 19, 2014

They don’t make ‘em like this anymore…

#hiphop #rap #rappers #thesourcemagazine #thesource #magazines #wutang #2pac #notoriousbig #bonethugs #redman #scarface #spice1 #mceiht #drdre #deathrow #doggpound #dazdillinger #kurupt #eazye

Aug 16, 2014

(Source: youthofparis, via m0nopoly)

Aug 14, 2014

Chet Faker - Gold

Aug 14, 2014

Joey BADA$$ - BIG DUSTY

Aug 14, 2014

Time to look through some old magazines…again.

#thesource #sourcemagazine #source #wordup #lowrider #vibe #robbreport #magazines #hiphop #music #beforetheinternet #tupac #slickrick #redman #spice1 #mceiht #scarface #dmx

Aug 13, 2014

vicemag:

We Need to Stop Killer Robots from Taking Over the World

Nick Bostrom’s job is to dream up increasingly lurid scenarios that could wipe out the human race: Asteroid strikes; high-energy physics experiments that go wrong; global plagues of genetically-modified superbugs; the emergence of all-powerful computers with scant regard for human life—that sort of thing.

In the hierarchy of risk categories, Bostrom’s specialty stands above mere catastrophic risks like climate change, financial market collapse and conventional warfare.

As the Director of the Future of Humanity Institute at the University of Oxford, Bostrom is part of a small but growing network of snappily-named academic institutions tackling these “existential risks”: the Centre for the Study of Existential Risk at the University of Cambridge; the Future of Life Institute at MIT and the Machine Intelligence Research Institute at Berkeley. Their tools are philosophy, physics and lots and lots of hard math.

Five years ago he started writing a book aimed at the layman on a selection of existential risks, but quickly realized that the chapter dealing with the dangers of artificial intelligence development was getting fatter and fatter and deserved a book of its own. The result is Superintelligence: Paths, Dangers, Strategies. It makes compelling—if scary—reading.

The basic thesis is that developments in artificial intelligence will gather apace, so that within this century it’s conceivable that we will be able to artificially replicate human-level machine intelligence (HLMI).

Once HLMI is reached, things move pretty quickly: Intelligent machines will be able to design even more intelligent machines, leading to what mathematician I.J. Good called back in 1965 an “intelligence explosion” that will leave human capabilities far behind. We get to relax, safe in the knowledge that the really hard work is being done by super-computers we have brought into being.


Aug 12, 2014

Ozzie Beats-Volume 1 (Marble Edition)

Aug 11, 2014

(Source: skeezd, via hip-hop-lifestyle)

Aug 09, 2014

(Source: mcxx, via vistale)
