
TOPIC: Impact of Technology on Society

Impact of Technology on Society 31 Mar 2017 02:54 #539

  • Brutus
  • Brutus's Avatar
  • OFFLINE
  • Premium Member
  • Posts: 80
  • Thank you received: 147
This is a mature and well referenced article about the inevitability of automation and its consequences


AUTOMATION IS SET TO HIT WORKERS IN DEVELOPING COUNTRIES HARD
The Fourth Industrial Revolution could bring mass global unemployment



Friday, Treasury Secretary Steve Mnuchin said he’s “not worried at all” about artificial intelligence replacing human workers because it's “50-100 more years” off.

In reality, data shows this is already happening — with an estimated 38 percent of existing U.S. jobs at risk of being turned over to machines by 2030, according to research from PwC. Another study, put out by the University of Oxford last year, had similar estimates: the researchers found that 47 percent of U.S. jobs were at risk of automation in the next two decades.
But despite justified fears of obsolescence in the West, it is actually developing economies that are poised to be hit hardest by the fourth Industrial Revolution, or “Industry 4.0,” in which machines are networked together in “smart factories” that have little need for human input.

This is already evident in China, where the domestic economy exploded in the last two decades thanks to Western companies that moved their manufacturing operations there. These same factories are now turning to intelligent machines as replacements for human labor.
Take, for instance, an electronics factory in Shanghai that has already replaced two-thirds of its human workforce, and plans to become 90 percent automated in the coming years.
Or the Foxconn factory that manufactures electronics for companies like Apple, which recently replaced 60,000 workers with robots in a single factory. And these examples are far from anecdotal.

China already purchases far more industrial robots than any other country, and by next year will have outpaced the two leading manufacturing nations — the United States and Germany — in terms of the number of industrial robots used in the country.

At a time when manufacturing employment is disappearing across the globe, the fear is that the industrial internet and its automated manufacturing army will lead to mass unemployment. Considering that China accounts for about a quarter of global manufacturing output and employs well over 100 million people in the manufacturing sector, such fears are not unfounded.

A World Bank report found that two-thirds of all jobs in the developing world face being automated out of existence, although the rate at which this will happen is uncertain and “depends on the pace of technological disruption.” Moreover, despite Western fears of the coming robo-apocalypse, the report also notes that the “share of occupations that could experience significant automation is actually higher in developing countries than in more advanced ones, where many of these jobs have already disappeared.”
The question is, assuming that this trend toward the networked automation of factories continues — and there is little evidence to suggest that it won’t — what happens next?

While there is no perfect analog, there is a precedent to consider here. Beginning in the 1980s, manufacturers in the United States and Western Europe began to relocate their factories overseas in search of cheaper labor. The effects of offshoring were immediately apparent in terms of manufacturing employment, which [url=https://www.brookings.edu/blog/the-avenue/2016/03/15/voter-anger-explained-in-one-chart/]dropped severely[/url] in the United States between 1980 and today.

Economists refer to this process as deindustrialization, which is a way of describing a country’s transition from a manufacturing to a service-based economy.

For developed countries, industrialization was the path to wealth. Research has shown that as incomes rise, the demand for goods rises more slowly than the demand for services. In other words, deindustrialization and the transition to a service economy seemed like a natural result of a fully industrialized country.

But if the process of deindustrialization began as arbitrage*, where a handful of mature industrial economies in the United States and Western Europe deindustrialized by sending manufacturing abroad in search of cheaper labor, it's now driven by technological advancements. Instead of moving elsewhere, manufacturers are opting to stay in Asia to focus on the burgeoning national economies in the region, which means that deindustrialization in these countries is fueled not by reshoring but by the rise of the automated, networked factory.

The problem is that, whereas countries like the United States had the opportunity to become fully industrialized (read: wealthy) before the process of deindustrialization began, developing industrial economies like China never had a chance to reach maturity before deindustrializing.

As Harvard economist Dani Rodrik noted in a recent article for The New York Times, the onset of premature deindustrialization in the developing world means that income for these workers has peaked at far lower levels than in the United States and Western Europe before the transition to a service economy began.

In other words, wages topped out too low for the citizens of developing countries to turn them into the consuming force that is necessary to sustain the ramped-up production enabled by the fourth industrial revolution.

Moreover, the nature of the service economy is itself changing and can no longer be seen as a safe and reliable alternative to manufacturing work, thanks to the “Uberization” of an increasing number of services.
“Phenomena such as Uber have a very strong impact on the service sector,” Jörg Mayer, senior economist at the UN Trade and Development Conference, told The Outline. “They will not have a positive effect on the creation of jobs, certainly not on the level of wages.”
“What you find in sectors where you have Uberization is very precarious engagement—you have to be on call and you’re only paid for the time you actually work and the wages are very low, and then there’s issues of social insurance, unemployment insurance, and so on,” he added. And that doesn't account for Uber's plans to eventually replace its drivers with self-driving cars.

How this process will play out in the long term is still quite uncertain, largely due to the relative novelty of Industry 4.0 — there simply isn’t enough data available yet to get a clear picture of how networked robots are affecting manufacturing employment. Some point to China’s growing manufacturing workforce as a counterpoint to those forecasting a robo-apocalypse in the country’s manufacturing sector.

Moreover, the fourth industrial revolution will also create some new jobs. Unfortunately, these new jobs won’t be enough to offset the number of low-skilled jobs lost to automation and will be pooled in the high-skilled sector.
In the absence of a heavy tax on robots, the report notes, “developing countries should embrace the digital revolution” by “redesigning education systems to create the managerial and labour skills needed to operate new technologies.”

“The digital economy enables the provision of services that didn’t exist before,” Mayer said, citing how cell phones have allowed those in rural areas to have access to banking or education services. “If you look at the effect of the digital revolution on the provision of services in the developing world, there is a strong positive effect. One always hears about adverse employment effect in the services sector through the digital revolution, but this evidence is in the advanced economies, where you actually have these services.”

“One has to be careful to simply transfer evidence that is undoubtedly there for advanced economies onto the developing countries,” he added.
Yet many economists aren’t quite as optimistic. Aside from the gloomy forecasts of the UN Trade and Development Conference Report, the 2015 World Economic Forum meeting was also focused on Industry 4.0 and its discontents.

According to one of the Davos reports, the fourth industrial revolution will destroy some 5 million (mostly white-collar) jobs in 15 industrialized and developing countries by 2020. But for Davos attendees like German chancellor Angela Merkel, noted evangelist of Industry 4.0, this was not a good reason to reconsider networking the world’s factories. Rather, those previously employed in manufacturing would simply have to accept the unspoken mantra of the fourth industrial revolution: adapt or perish.

Of course, some will argue that this has always been the case, from the first three industrial revolutions — defined by the creation of the factory in the late 18th century, the assembly line in the early 20th century, and increasing factory automation beginning in the 1980s — to the networked factory of industry 4.0. But never before have machines threatened to render two-thirds of the jobs in the developing world (which accounts for some two-thirds of the globe, depending on how you measure) obsolete.

Coupled with similarly bleak projections in developed countries, the rise of Industry 4.0 begins to look less like a paradigm shift and more like a paradigm collapse. But whether this robot utopia will lead to the life without work we’ve always been promised, or some post-industrial hell that makes Black Mirror seem like a comedy, remains, in the cautiously optimistic words of the UN report, “an open question.”


Source: The Outline


* the simultaneous buying and selling of securities, currency, or commodities in different markets or in derivative forms in order to take advantage of differing prices for the same asset.
To eliminate unemployment one needs only to eliminate the unemployed
The following user(s) said Thank You: Paul-UB40, El-dudeareno

Impact of Technology on Society 31 Mar 2017 03:19 #540

  • Brutus
  • Brutus's Avatar
  • OFFLINE
  • Premium Member
  • Posts: 80
  • Thank you received: 147
An article from 2013. If you do not read much about technological impact and futurism, read at least this.

Smart machines probably won't kill us all—but they'll definitely take our jobs, and sooner than you think.

Welcome, Robot Overlords. Please Don't Fire Us?



THIS IS A STORY ABOUT THE FUTURE. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.




The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.

Maybe you think I'm pulling your leg here. Or being archly ironic. After all, this does have a bit of a rose-colored tint to it, doesn't it? Like something from The Jetsons or the cover of Wired. That would hardly be a surprising reaction. Computer scientists have been predicting the imminent rise of machine intelligence since at least 1956, when the Dartmouth Summer Research Project on Artificial Intelligence gave the field its name, and there are only so many times you can cry wolf.

Today, a full seven decades after the birth of the computer, all we have are iPhones, Microsoft Word, and in-dash navigation. You could be excused for thinking that computers that truly match the human brain are a ridiculous pipe dream.

But they're not. It's true that we've made far slower progress toward real artificial intelligence than we once thought, but that's for a very simple and very human reason: Early computer scientists grossly underestimated the power of the human brain and the difficulty of emulating one. It turns out that this is a very, very hard problem, sort of like filling up Lake Michigan one drop at a time. In fact, not just sort of like. It's exactly like filling up Lake Michigan one drop at a time. If you want to understand the future of computing, it's essential to understand this.

Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.

By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.

At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.

So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?

But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.



IF YOU HAVE ANY KIND OF BACKGROUND in computers, you've already figured out that I didn't pick these numbers out of a hat. I started in 1940 because that's about when the first programmable computer was invented. I chose a doubling time of 18 months because of a cornerstone of computer history called Moore's Law, which famously estimates that computing power doubles approximately every 18 months. And I chose Lake Michigan because its size, in fluid ounces, is roughly the same as the computing power of the human brain measured in calculations per second.

In other words, just as it took us until 2025 to fill up Lake Michigan, the simple exponential curve of Moore's Law suggests it's going to take us until 2025 to build a computer with the processing power of the human brain. And it's going to happen the same way: For the first 70 years, it will seem as if nothing is happening, even though we're doubling our progress every 18 months. Then, in the final 15 years, seemingly out of nowhere, we'll finish the job.
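The doubling arithmetic in the thought experiment above is easy to check. Here is a minimal Python sketch; the ~1.6 × 10^17 fluid-ounce figure for Lake Michigan's volume is my own rough conversion of the lake's roughly 1,180 cubic miles, not a number from the article:

```python
# One fluid ounce added in 1940, with the allowance doubling every 18 months.
# Cumulative total after n completed doublings: 2**(n + 1) - 1 ounces.
FL_OZ_PER_GALLON = 128
LAKE_MICHIGAN_FL_OZ = 1.6e17  # rough conversion of the lake's volume (my estimate)

def total_fl_oz(year):
    doublings = int((year - 1940) / 1.5)  # one doubling per 18-month period
    return 2 ** (doublings + 1) - 1

for year in (1950, 1960, 1970, 2000, 2010, 2020, 2026):
    total = total_fl_oz(year)
    print(f"{year}: {total / FL_OZ_PER_GALLON:,.0f} gallons "
          f"({total / LAKE_MICHIGAN_FL_OZ:.4%} of the lake)")
```

Running it reproduces the milestones: about a gallon by 1950, roughly 16,000 gallons by 1970, still a tiny fraction of a percent of the lake in 2010, and more water than the lake holds by the mid-2020s.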

And that's exactly where we are. We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence. That's because even a thousandth of the power of a human brain is—let's be honest—a bit of a joke. Sure, it's a billion times more than the first computer had, but it's still not much more than the computing power of a hamster.

This is why, even with the IT industry barreling forward relentlessly, it has never seemed like we were making any real progress on the AI front. But there's another reason as well: Every time computers break some new barrier, we decide—or maybe just finally get it through our thick skulls—that we set the bar too low. At one point, for example, we thought that playing chess at a high level would be a mark of human-level intelligence. Then, in 1997, IBM's Deep Blue supercomputer beat world champion Garry Kasparov, and suddenly we decided that playing grandmaster-level chess didn't imply high intelligence after all.

So maybe translating human languages would be a fair test? Google Translate does a passable job of that these days. Recognizing human voices and responding appropriately? Siri mostly does that, and better systems are on the near horizon. Understanding the world well enough to win a round of Jeopardy! against human competition? A few years ago IBM's Watson supercomputer beat the two best human Jeopardy! champions of all time. Driving a car? Google has already logged more than 300,000 miles in its driverless cars, and in another decade they may be commercially available.

The truth is that all this represents more progress toward true AI than most of us realize. We've just been limited by the fact that computers still aren't quite muscular enough to finish the job. That's changing rapidly, though. Computing power is measured in calculations per second—a.k.a. floating-point operations per second, or "flops"—and the best estimates of the human brain suggest that our own processing power is about equivalent to 10 petaflops. ("Peta" comes after giga and tera.) That's a lot of flops, but last year an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory was clocked at 16.3 petaflops.

Of course, raw speed isn't everything. Livermore's Blue Gene/Q fills a room, requires eight megawatts of power to run, and costs about $250 million. What's more, it achieves its speed not with a single superfast processor, but with 1.6 million ordinary processor cores running simultaneously. While that kind of massive parallel processing is ideally suited for nuclear-weapons testing, we don't know yet if it will be effective for producing AI.
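To get a feel for what that parallelism and power budget mean per core, here is a quick back-of-the-envelope calculation using the figures just quoted; the per-core and per-watt numbers are my own arithmetic, not values from the article:

```python
# Figures quoted above for Livermore's Blue Gene/Q and the brain estimate.
BLUE_GENE_Q_FLOPS = 16.3e15  # 16.3 petaflops
CORES = 1.6e6                # 1.6 million ordinary processor cores
POWER_WATTS = 8e6            # eight megawatts
BRAIN_FLOPS = 10e15          # the article's ~10-petaflop brain estimate

flops_per_core = BLUE_GENE_Q_FLOPS / CORES
watts_per_petaflop = POWER_WATTS / (BLUE_GENE_Q_FLOPS / 1e15)
print(f"per core: {flops_per_core / 1e9:.1f} gigaflops")
print(f"power: {watts_per_petaflop / 1e3:.0f} kW per petaflop")
print(f"speed vs. brain: {BLUE_GENE_Q_FLOPS / BRAIN_FLOPS:.2f}x")
```

Each core is ordinary (around 10 gigaflops), and the machine burns roughly half a megawatt per petaflop, whereas a brain delivers its ~10 petaflops on something like 20 watts. That gap is one way to see why raw speed alone doesn't settle whether this architecture can produce AI.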

But plenty of people are trying to figure it out. Earlier this year, the European Commission chose two big research endeavors to receive a half billion euros each, and one of them was the Human Brain Project led by Henry Markram, a neuroscientist at the Swiss Federal Institute of Technology in Lausanne. He uses another IBM supercomputer in a project aimed at modeling the entire human brain. Markram figures he can do this by 2020.

That might be optimistic. At the same time, it also might turn out that we don't need to model a human brain in the first place. After all, when the Wright brothers built the first airplane, they didn't model it after a bird with flapping wings. Just as there's more than one way to fly, there's probably more than one way to think, too.

Google's driverless car, for example, doesn't navigate the road the way humans do. It uses four radars, a 64-beam laser range finder, a camera, GPS, and extremely detailed high-res maps. What's more, Google engineers drive along test routes to record data before they let the self-driving cars loose.

Is this disappointing? In a way, yes: Google has to do all this to make up for the fact that the car can't do what any human can do while also singing along to the radio, chugging a venti, and making a mental note to pick up the laundry. But that's a cramped view. Even when processing power and software get better, there's no reason to think that a driverless car should replicate the way humans drive. They will have access to far more information than we do, and unlike us they'll have the power to make use of it in real time. And they'll never get distracted when the phone rings.

In other words, you should still be impressed. When we think of human cognition, we usually think about things like composing music or writing a novel. But a big part of the human brain is dedicated to more prosaic functions, like taking in a chaotic visual field and recognizing the thousands of separate objects it contains. We do that so automatically we hardly even think of it as intelligence. But it is, and the fact that Google's car can do it at all is a real breakthrough.

The exact pace of future progress remains uncertain. For example, some physicists think that Moore's Law may break down in the near future and constrain the growth of computing power. We also probably have to break lots of barriers in our knowledge of neuroscience before we can write the software that does all the things a human brain can do. We have to figure out how to make petaflop computers smaller and cheaper. And it's possible that the 10-petaflop estimate of human computing power is too low in the first place.

Nonetheless, in Lake Michigan terms, we finally have a few inches of water in the lake bed, and we can see it rising. All those milestones along the way—playing chess, translating web pages, winning at Jeopardy!, driving a car—aren't just stunts. They're precisely the kinds of things you'd expect as we struggle along with platforms that aren't quite powerful enough—yet. True artificial intelligence will very likely be here within a couple of decades. Making it small, cheap, and ubiquitous might take a decade more.
In other words, by about 2040 our robot paradise awaits.



Source: Mother Jones
To eliminate unemployment one needs only to eliminate the unemployed
The following user(s) said Thank You: Paul-UB40, El-dudeareno

Impact of Technology on Society 05 Apr 2017 01:13 #981

  • Brutus
  • Brutus's Avatar
  • OFFLINE
  • Premium Member
  • Posts: 80
  • Thank you received: 147
Is there really a future for us outside UBI?


Pay crash expected in online gig economy as millions seek work


A huge number of people in South-East Asia and sub-Saharan Africa looking for online “gig economy” work could cause a race to the bottom on pay and conditions, according to a new report from the Oxford Internet Institute.

Millions of people in countries including Kenya, Nigeria, South Africa, Vietnam, Malaysia and the Philippines are signed up to websites that pay them to complete tasks such as data entry, transcription and graphic design. The jobs can last minutes or months, and are generally outsourced from companies in richer countries.

The researchers didn’t name the sites they looked at, but Mark Graham, one of the authors of the report, says they are comparable to freelancer.com, Upwork and Amazon’s Mechanical Turk. “The sheer variety of people doing this work is surprising. Almost any sort of work is being done digitally. There’s no model,” he says.

Over three years, Graham and his colleagues conducted 152 interviews and surveyed 456 workers. They found that workers enjoyed the higher levels of autonomy and pay offered by online work compared with some local job opportunities, but that an increasing oversupply of labour is worsening conditions.

Surge in demand

Seventy per cent of people surveyed said gig work was one of their main sources of income, but nearly half said they felt easily replaceable. One major gig work platform had 1.75 million worker profiles, but only 200,000 of them had completed an hour’s work or earned at least US$1.

The researchers suggest demand for such work will get higher, because a billion more people are expected to get online by 2020, the majority of whom will come from low- and middle-income countries where online gig work is most common. “There will be a huge surge in demand for online jobs, and unless strategies and policies are put in place it will be a race to the bottom,” says Graham.

At the moment, these gig workers don’t have employee protection in the country where the work is generated or where it is being carried out. The websites have no mechanisms in place to offer benefits like holiday pay or sick leave, and as the work they offer is computer-based and can theoretically be done anywhere, workers are easily replaced.

If workers in one country demanded better pay, for example, workers in other countries could simply lower their rates to get more jobs. The same goes for if one country tightened regulations.

Who protects the workers?

“People are undercutting each other on pay and conditions. But we don’t know whose job it is to protect worker rights,” says Eddie Keane at the University of Limerick, Ireland.

The report argues that because only a handful of countries are responsible for the demand in digital work, concentrated mostly in North America and western Europe, it should be these countries that enforce a minimum standard. That way, workers would have their rights protected regardless of their location.
“If people are relying on this work to survive, we need mechanisms to protect them if they get sick, pregnant, or if an employer doesn’t pay up,” says Graham. “At the moment we don’t have that.”

But this would be incredibly hard to enforce, says Andrea Broughton at the Institute for Employment Studies, UK. “A lot of work goes under the radar, not registered by any regulators. It’s very messy and difficult to manage. How do you even start to work out what to regulate?”


Source: New Scientist
To eliminate unemployment one needs only to eliminate the unemployed

Impact of Technology on Society 05 Apr 2017 12:07 #1020

  • El-dudeareno
  • El-dudeareno's Avatar
  • OFFLINE
  • Premium Member
  • Posts: 143
  • Thank you received: 109
Do not let the behaviour of others destroy your inner peace.

Impact of Technology on Society 26 Jun 2017 15:10 #3397

  • El-dudeareno
  • El-dudeareno's Avatar
  • OFFLINE
  • Premium Member
  • Posts: 143
  • Thank you received: 109
"What jobs will still be around in 20 years? " :huh:

I was wondering what percentage the 'Work-roach jobs' would fall under the 'Chance of automation'... :S

www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health

Do not let the behaviour of others destroy your inner peace.

Impact of Technology on Society 26 Jun 2017 15:42 #3401

  • Paul-UB40
  • Paul-UB40's Avatar
  • OFFLINE
  • Moderator
  • Posts: 964
  • Thank you received: 476
Its Interesting to think; 40 Years ago we were all told "In the wonderful World of Tomorrow" due to automation & Computers,
we would have far more leisure time to fulfill our Hobbies and enrich our life. B)
What no one thought about was HOW we were going to do this with No Income.
Except for the filthy rich 5% that is. :(
Been a Few Places; Done a Few Things

Impact of Technology on Society 28 Jun 2017 12:52 #3425

  • El-dudeareno
  • El-dudeareno's Avatar
  • OFFLINE
  • Premium Member
  • Posts: 143
  • Thank you received: 109
You say that "we would have far more leisure time to fulfill our Hobbies and enrich our life. B)". Maybe they feel this might have applied to some people: :whistle: :P :whistle:

"The devil finds work for idle hands. something that you say which means people who have no work or activity are more likely to do things they should not do, such as commit crimes". :pinch:



idioms.thefreedictionary.com/The+devil+finds+work+for+idle+hands
Do not let the behaviour of others destroy your inner peace.

Impact of Technology on Society 06 Jul 2017 03:26 #3548

  • Brutus
  • Brutus's Avatar
  • OFFLINE
  • Premium Member
  • Posts: 80
  • Thank you received: 147
"The devil finds work for idle hands."


It is an old saying but probably true.
Indeed "idle hands" can find that the devil is not such a bad guy after all and even start questioning the wisdom of God.
Dangerous topics indeed for anyone that benefits from others' misery in any society...

To eliminate unemployment one needs only to eliminate the unemployed
The following user(s) said Thank You: Paul-UB40, El-dudeareno

Impact of Technology on Society 06 Jul 2017 15:36 #3562

  • Paul-UB40
  • Paul-UB40's Avatar
  • OFFLINE
  • Moderator
  • Posts: 964
  • Thank you received: 476
Indeed Brutus; The Old ideas of God and the Devil belongs back in the Dark past of Human History;
Invented by primitive man to give them hope for the future; Such Ideas are now long gone.
Today man has to adapt, To Fight, to strive for a better future, If not for us but for our Children's Children.
Been a Few Places; Done a Few Things
The following user(s) said Thank You: Brutus, El-dudeareno