Author - systemdigits.com

Killer robots are almost a reality and need to be banned, warns leading AI scientist

The technology to create killer robots is already here and needs to be banned, a leading artificial intelligence scientist has warned. Stuart Russell, a professor of computer science at the University of California, Berkeley, said “allowing machines to choose to kill humans” would be “devastating” for world peace and security.

The professor, who has worked in the field of artificial intelligence (AI) for more than 35 years, also warned that the window to ban lethal robots was “closing fast”.

His warning comes as campaigners are making the case at the United Nations (UN) this week for a global prohibition on lethal autonomous weapons systems.

Yesterday the pressure group, the Campaign to Stop Killer Robots, showed a short film it had produced to a meeting of countries participating in the Convention on Conventional Weapons; the film painted a dystopian scenario based on existing technologies.

The video, entitled ‘Slaughterbots’, starts with an enthusiastic CEO on stage unveiling a new product to an excited crowd. Instead of a new smartphone or consumer tech innovation, he reveals a miniaturised drone that uses facial recognition to identify its target before administering a small yet lethal explosive blast to the skull.

The nameless CEO boasts: “A $25 million order now buys this, enough to kill half a city – the bad half. Nuclear is obsolete, take out your entire enemy virtually risk-free. Just characterise him, release the swarm and rest easy.”

However the film shows the weapons quickly falling into the hands of terrorists who use them to slaughter politicians and a classroom of students.

Professor Russell said: “This short film is more than just speculation, it shows the results of integrating and miniaturising technologies that we already have.

“[AI’s] potential to benefit humanity is enormous, even in defence. But allowing machines to choose to kill humans will be devastating to our security and freedom – thousands of my fellow researchers agree.

“We have an opportunity to prevent the future you just saw, but the window to act is closing fast”.

More than 70 countries participating in the Convention on Conventional Weapons have been meeting in Geneva this week to discuss a potential worldwide ban on lethal robots.

The convention has already prohibited weapons such as blinding lasers before they were widely acquired or used.

Autonomous weapons that have a degree of human control, such as drones, are already used by the militaries of advanced countries such as the UK, US, Israel and China.

The Campaign to Stop Killer Robots is arguing that modern low-cost sensors and recent advances in artificial intelligence have made it possible to design a weapons system that could attack and kill without human control.

Jody Williams, a 1997 Nobel Peace Laureate and co-founder of the campaign, said: “To avoid a future where machines select and attack targets without further human intervention, countries must draw the line against unchecked autonomy in weapon systems.

“With adequate political will, governments can negotiate an international treaty and ban killer robots — fully autonomous weapons — within two years’ time.”

The pressure group’s concerns echo those voiced by technology billionaire, Elon Musk, earlier this year.

In July the entrepreneur behind companies such as Tesla and SpaceX described AI as the “biggest risk we face as a civilisation” and warned that it needed to be regulated before “people see robots go down the street killing people”.

Cuba Found A Cancer Vaccine! More Than Four Thousand People Have Already Been Cured By It

Here’s the bottom line: the for-profit cancer industry has been lying to us.

Let me first say that most medical professionals have rock solid integrity and truly care about people over profits. However, we can’t ignore the inherent dark side of our current state of for-profit health care.

The truth is, some cancer doctors like the criminal oncologist Dr. Farid Fata falsely diagnosed people with cancer to make money off “treating” them with deadly chemicals known as “chemotherapy.”

Dr. Fata, who worked out of a state-of-the-art cancer center in Detroit, is now a convicted felon.

But there are more criminals working inside the cancer industry: Oncologists, cancer surgeons, breast cancer specialists, and mammography con artists.

Their goal is to scare you with a false positive diagnosis, then convince you to undergo surgery, chemotherapy or radiation therapy that you don’t even need!

Cuba has long been known for its high-quality cigars, and lung cancer is a major public health problem on the island, the fourth-leading cause of death in the country.

A 2007 study of patients with stages IIIB and IV lung cancer, published in the Journal of Clinical Oncology, confirmed the safety of CimaVax and showed an increase in tumor-reducing antibody production in more than half of cases.

It proved particularly effective for increased survival if the study participant was younger than 60.

So far, 5,000 patients worldwide have been treated with CimaVax, including 1,000 patients in Cuba.

Lee said the latest Cuban study of 405 patients, which has not yet been published, confirms earlier findings about the safety and efficacy of the vaccine.

What’s more, the shot is cheap — each costs the Cuban government just $1, Wired reported. Studies have found there are no significant side effects.

“We think it may be an effective way to prevent cancer from developing or recurring, so that’s where a lot of our team’s excitement comes in,” Lee said.

“There’s good reason to believe that this vaccine may be effective in both treating and preventing several types of cancer, including not only lung but breast, colorectal, head-and-neck, prostate and ovarian cancers, so the potential positive impact of this approach could be enormous.”

How it works

CimaVax induces people to build antibodies against a certain growth factor that cancer cells make. For people who already have lung cancer, this response results in the body actually getting rid of the cancer cells.

And for people who are currently healthy but at high risk for lung cancer — say, a lung cancer patient in remission — the treatment acts as a vaccine to prevent future relapse.

Johnson envisions that it could one day be a standard preventive vaccine that a person gets in childhood, much like the way we get vaccinated against polio, measles, mumps and rubella.

In addition to CimaVax, Roswell Park scientists are also reviewing other vaccine approaches from researchers at Cuba’s Center of Molecular Immunology (where CimaVax was invented) that could one day help patients overcome brain and pancreas cancer, as well as blood cancers like leukemia and lymphoma.

While they aren’t as far along as CimaVax, Johnson said she is excited for the possibility these other treatments hold.

“They are a very innovative group of scientists, and they have vaccines and drugs that we think could play a very significant role in our fight against cancer,” she said. “We’re delighted to be working with them and we hope very soon that we can start our trial on CimaVax — hopefully the first of many clinical trials to be done with some of these Cuban vaccine approaches.”

To be clear, the CimaVax doesn’t cure cancer. It’s a therapeutic vaccine that works by targeting the tumor itself, specifically going after the proteins that allow a tumor to keep growing. (And as PBS points out, a person can’t just take a shot of CimaVax and continue to smoke without fear of lung cancer.)

“We hope to determine in the next few years whether giving CimaVax to patients who’ve had a lung cancer removed, or maybe even to people at high risk of developing lung or head-and-neck cancers because of a history of heavy smoking, may be beneficial and may spare those people from having a cancer diagnosis or recurrence,” Lee said.

What needs to happen first

Roswell Park faces many bureaucratic hurdles before clinical trials for CimaVax actually begin.

Because an embargo on Cuba is still in effect, Roswell Park had to apply for a license from the Office of Foreign Assets Control at the Treasury Department to bring CimaVax into the U.S. The license allows them to use it for lab research, but not to give it out to patients, Johnson explained.

Then, in order to start testing CimaVax on Americans, Roswell Park has to get approval for a trial from the U.S. Food and Drug Administration. Currently, Roswell Park and the FDA are communicating about the design of the trial, Johnson said.

Once the FDA signs off, Johnson has to submit the project to Roswell Park’s Scientific Review Committee to evaluate its scientific merit, as well as their Institutional Review Board, a body that evaluates the ethical aspects of any medical research involving human subjects.

These two final processes alone can take months to complete, Johnson said, and she hopes to start Phase I and II clinical trials, which assess the effectiveness and safety of a drug, sometime in 2016.

The entire process, from start to finish, can take years to complete — even when the experimental drug was invented in the U.S.

To that end, the United States is currently at work developing two lung cancer vaccines of its own, GVAX and BLP 25, though neither has been studied for as long as CimaVax.

Cuba’s public health record

How does a tiny island nation with limited economic resources pioneer a powerhouse cancer vaccine? “They’ve had to do more with less,” Johnson told Wired in May. “So they’ve had to be even more innovative with how they approach things. For over 40 years, they have had a preeminent immunology community.”

Despite decades of economic problems and the U.S. trade embargo, Cuba has been a model of public health.

According to The New York Times, life expectancy for Cubans is 79 years, on par with the United States, despite the fact that Cuba’s economy per person is eight times smaller.

While many drugs and even anesthesia have been hard to come by over the years, Cuba has one of the best doctor-to-patient ratios in the world. Moreover, the Cuban government’s investment in primary care for residents and in preventative health measures like public education, housing and nutrition has paid huge dividends in the health of citizens, especially relative to similarly poor countries.

Looking forward, ongoing research collaborations between the two nations are almost certainly on the horizon as relations between Cuba and the U.S. continue to thaw.

For now, Lee says the researchers at Roswell Park have their eyes trained on about 20 cancer treatment and prevention technologies in Cuba — including another lung cancer vaccine called racotumomab that the group hopes to study in clinical trials at Roswell.

How to get the vaccine and how to contact experts from Cuba

EscoZul is produced only by the Cuban company Labiofam: Avenida Independencia km 16 1/2, Boyeros, Santiago de las Vegas, Havana, Cuba. Tel: +53 683 3188 / 683 2151, fax: 683 2151, tel: 537 683 2151. Contact Dr. Verges (radiologist) and Niudis Cruz: 537 683 0924, e-mail: niudis.cruz@infomed.sld.cu and labiofam@ceniai.inf.cu.

Source: http://amindunchained.com

Scientists suggest new theory behind the mystery of the Bermuda Triangle

The mystery of the Bermuda Triangle may finally have been solved by a group of satellite meteorologists.

For decades, a series of disappearances within the 500,000 square kilometre area between Miami, Puerto Rico and Bermuda has remained unexplained, dismissed as coincidental by many.

The triangle is said to be responsible for the loss of at least 1,000 lives along with some 75 planes and hundreds of ships within the past 100 years.

Scientists have now claimed that hexagonal clouds creating “air-bombs” with winds of up to 170mph could be responsible for hundreds of unsolved incidents at sea.

The storms are said to be so powerful that ships and planes can be plunged into the sea in an instant.

Researchers also noted that large-scale clouds were appearing over the western tip of the island of Bermuda, ranging from 20 to 55 miles wide.

Dr Steve Miller, a satellite meteorologist at Colorado State University, told the Science Channel’s What on Earth programme: “You don’t typically see straight edges with clouds.

“Most of the time, clouds are random in their distribution.”

Using radar satellites to measure what was happening underneath the unusual clouds, the research group found sea level winds were also reaching dangerously high speeds, creating waves as high as 45ft as a result.

Meteorologist Randy Cerveny said the hexagonal shapes over the ocean “are in essence air bombs”.

“They are formed by what are called microbursts and they’re blasts of air that come down out of the bottom of a cloud and then hit the ocean,” he explained.

These environmental factors “create waves that can sometimes be massive in size as they start to interact with each other.”

Claims of unusual and ‘paranormal’ occurrences date back as far as 1492, however, when Christopher Columbus reported seeing strange lights and erratic compass readings.

An average of four planes and 20 ships are said to go missing in the area each year.

Those ‘biological age’ tests probably can’t tell you how fast you’re aging

Don’t waste money on tests that promise to tell you how fast you’re aging, researchers say. A new study suggests that we don’t know enough about the body’s aging process for them to be useful.

Plenty of expensive tests claim that looking at your genes, a drop of blood, or other physical indicators can tell you your “biological age” — or how fast your body is physically breaking down, regardless of what the calendar says. There is evidence that these markers can give you a rough idea of how healthy your body is. In a study published this week in the American Journal of Epidemiology, scientists looked at the health data of nearly 1,000 New Zealanders, collected from birth to age 38. They analyzed 11 biological markers — including blood markers and gene changes — in the same people from ages 26 to 38. But none of these markers agreed on how fast someone was aging. And many of them weren’t good at predicting things like physical and cognitive decline, as measured by balance, grip, cognitive tests, and how old their faces looked.

People do age at different rates, but there’s no agreement on which measures really tell you the truth, the study shows. Also, aging happens at different rates in different places, according to the researchers, so you might get one answer from blood markers and another from how good your balance is.

One of the most well-hyped measures of biological age involves measuring the length of the ends of DNA, called telomeres. Telomeres are like little caps on the ends of DNA strands that keep the genetic material from fraying, and they become shorter as we grow older. (Three scientists received the 2009 Nobel Prize in Medicine for discovering this fact about telomeres.) But the researchers found that telomere length didn’t predict cognitive or physical decline, though it had some correlation with how old people’s faces looked (as judged by others).

The team also looked at changes at specific places in the DNA for patterns that correlate with aging. The scientists measured three different DNA patterns when the participants were 26, and then again when they were 38. These patterns did show that 12 years had gone by but, again, they didn’t predict cognitive or physical decline.

Finally, the team used algorithms to see if there was a correlation between physiological markers like lung function and heart function and biological age. They found stronger results than with telomeres or the DNA patterns, but it still wasn’t robust. So, the study concludes, those aging tests are almost certainly premature. All these markers are probably measuring something — but until we have better data, it’s better to save your $300.

The FDA has approved the first digital pill

The Food and Drug Administration has approved the first digital pill for the US, which tracks whether patients have taken their medication. The pill, called Abilify MyCite, is fitted with a tiny ingestible sensor that communicates with a patch worn by the patient — the patch then transmits medication data to a smartphone app, which the patient can voluntarily upload to a database for their doctor and other authorized persons to see. Abilify is a drug that treats schizophrenia and bipolar disorder, and is an add-on treatment for depression.

The Abilify MyCite features a sensor the size of a grain of sand made of silicon, copper, and magnesium. An electrical signal is activated when the sensor comes into contact with stomach acid — the sensor then passes through the body naturally. A patch the patient wears on their left rib cage receives the signal several minutes after the pill is ingested. The patch then sends data like the time the pill was taken and the dosage to a smartphone app over Bluetooth, and must be replaced every seven days. The patient’s doctor and up to four other people chosen by the patient, including family members, can access the information. The patient can revoke access at any time.

 

The pill comes after years of research and is a venture between Japanese pharmaceutical company Otsuka and digital medicine service Proteus Digital Health, which makes the sensor. The pill is one way to address the prevalent problem of patients not taking their medication correctly, with the IMS Institute estimating that the improper and unnecessary use of medicine cost the US healthcare sector over $200 billion in 2012. The approval also opens the door for pills that are used for other conditions beyond mental health to be digitized.

Example of a digital medicine system
Photo: Proteus Digital Health

Experts, though, have expressed concerns over what the pill might mean for privacy. Some are worried that tracking pills will be a step towards punishing patients who don’t comply. Ameet Sarpatwari, an instructor in medicine at Harvard Medical School, told The New York Times the digital pill “has the potential to improve public health. [But] if used improperly, it could foster more mistrust instead of trust.”

The Wall Street Journal reports that the FDA is anticipating a potential raft of approval requests for other digital pills. A spokesperson told the publication the FDA is planning to hire more staff with “deep understanding” of software development in relation to medical devices, and engage with entrepreneurs on new guidelines.

Otsuka hasn’t indicated how much the digitized Abilify pills will cost yet. The WSJ reports the company plans to work with some insurers to cover the digitized pills, and will ramp up production only if it can find willing insurers.

Correction November 16th, 11:07am ET: A previous version of this story listed additional features in the MyCite patch that were listed on the manufacturer’s website. The story has been changed to reflect the FDA-approved features.

Flying cars could hit the skies in 2019 after Volvo’s parent company Geely buys ‘street-legal plane’ startup Terrafugia

Geely announced this week that it has acquired Terrafugia’s operations and assets in their entirety.

Mr Li Shufu, founder and chairman of Geely, said: ‘The team at Terrafugia have been at the forefront of believing in and realizing the vision for a flying car and creating the ultimate mobility solution.

‘This is a tremendously exciting sector and we believe that Terrafugia is ideally positioned to change mobility as we currently understand it and herald the development of a new industry in doing so.

‘Our investment in the company reflects our shared belief in their vision and we are committed to extending our full support to Terrafugia, leveraging the synergies provided by our international operations and track record of innovation, to make the flying car a reality.’

Since it was founded in 2006, Terrafugia has been working on flying cars and has developed a number of working prototypes.

The firm aims to deliver its first flying car to the market in 2019, with the world’s first vertical take-off and landing (VTOL) flying car being made available by 2023.

Terrafugia’s Transition was recently granted an exemption by the FAA, allowing it to be classified as a ‘light-sport’ craft.

The aircraft has fold-out wings, weighs roughly 1,300 pounds and has fixed landing gear.

It seats a maximum of two people, including the pilot.

To operate it, one must have a sport pilot certificate, which requires just 20 hours of training.

The craft reaches a cruise speed of 100 mph, and can achieve a range of 400 miles.

And, it can fly to a maximum altitude of 10,000 feet.

The firm’s newer design, the concept TF-X, has fold-out wings with twin electric motors attached to each end, and is expected to cost £183,000 ($261,000).

THE TF-X: KEY SPECIFICATIONS

The vehicle will have a cruising speed of 200 mph (322 km/h), along with a 500-mile (805 km) flight range.

TF-X will have fold-out wings with twin electric motors attached to each end.

These motors allow the TF-X to move from a vertical to a horizontal position, and will be powered by a 300 hp engine.

The planned four-person TF-X will be semi-autonomous and computer-controlled, so that passengers can simply type in a destination before taking off.

TF-X vehicles will be capable of automatically avoiding other air traffic, bad weather, and restricted and tower-controlled airspace.

The vehicle will be able to recharge its batteries either from its engine or by plugging in to electric car charging stations.

It is expected to cost £183,000 ($261,000).

Thrust will be provided by a ducted fan.

Chris Jaran, CEO of Terrafugia, said: ‘After working in the helicopter industry for over 30 years, and the aviation industry in China for 17 years, Terrafugia presents a unique opportunity to be at the forefront of a fledgling but enormously exciting industry.’

How To Free Up Disk Space In Windows 10 Using OneDrive Files On-Demand?

The Windows 10 Fall Creators Update arrived with many handy features. And one FCU feature can help you free up disk space on your hard drive using the power of the cloud. It’s known as OneDrive Files On-Demand.

What is OneDrive Files On-Demand?

If you’re running the Fall Creators Update on your Windows 10 PC, you can edit files stored in your OneDrive cloud storage on your computer without downloading them permanently. This is done using OneDrive Files On-Demand, and it can free up disk space on your computer, which you can then use to store other things.

Files on your OneDrive that you have marked “online-only” are downloaded only when you want to edit them, for instance when you want to make changes to a PowerPoint presentation. After you have finished editing, the version of the file on OneDrive is replaced with the new one.

How to use OneDrive Files On-Demand?

You need to set up OneDrive on your device before you can turn on the feature. To sign in, open the OneDrive app from the Start Menu and follow the setup.


Next, follow the steps mentioned below to enable Files On-Demand on Windows 10 FCU:

  1. Right-click the OneDrive icon in the Notifications area.
  2. Go to Settings from the drop-down menu.
  3. Under the Settings tab, tick the checkbox that says “Save space and download files as you use them” to turn on the feature and free up disk space on your machine.
  4. Click OK to save and continue.

After you enable the feature, you can see it working in the File Explorer > OneDrive folder. A cloud status icon appears next to the files and folders that are online-only. You can disable the on-demand feature by following the same steps.

To mark a file or folder as online-only, go to the OneDrive folder, right-click the file and click “Free Up Space” in the context menu.


 

The files you’ve created online or on other devices are online-only by default. If you want a file to be available offline on your current device, click the “Always keep on this device” option.


One thing to keep in mind is that turning on OneDrive Files On-Demand for one Windows 10 PC won’t enable it across all of your devices. These settings are tied individually to your devices.

Also, I wouldn’t recommend using the Files On-Demand feature if your internet bandwidth is limited, as any change you make to online-only files will cost you valuable data. As an alternative, you can check out Google’s Backup & Sync tool, which automatically syncs the folders on your hard drive.

Google Deletes 300 Apps From Play Store That Powered Android DDoS Botnet “WireX”

 

Android Malware
Source: portal gda/flickr

Massive DDoS attacks on websites and company networks are mostly associated with IoT devices. These next-gen devices serve as an easy-to-source inventory for hackers building gigantic botnets. But in the recent past, a more common breed of device, Android, has become an apparently soft target. Google recently deleted around 300 apps from the official Play Store which were used to create what is being called one of the first Android botnets. Known by the name WireX, it included around 120,000 IP addresses across 100 different countries.

The first hints of WireX existing in the wild date back to August 2, 2017, but it drew significant attention after the attacks that happened on August 17.

According to a report published by the researchers, the apps were available in the form of storage managers, audio/video players and the like, and were tasked with making the Android device a part of WireX. Users remained unsuspicious of the apps’ activities, as the apps could work in the background and use the system’s resources.

WireX could send junk HTTP traffic, at rates of up to 20,000 requests per second, to the target website. Although that is not huge in magnitude, it could still force the target’s servers to burn CPU cycles for nothing.

Image: The estimated growth of the botnet, based on the count of unique IPs per hour observed participating in attacks.

The mushrooming botnet was put to an end by seven companies: Google, CloudFlare, Akamai, Flashpoint, Dyn, RiskIQ, and Team Cymru.

“We believe we identified this botnet and took action while it was still in the early stages of growing,” CloudFlare’s Justin Paine told Ars Technica. That’s one of the main reasons the botnet could be taken down without much difficulty and before the hacker could increase the size of the botnet.

You can protect your device from such malicious apps by enabling the Play Protect feature rolled out by Google recently. The researchers found that the feature was showing warnings for the apps they tested.


“Notably, it is no longer possible to install this application as Google’s PlayProtect feature now blocks this app from being installed. Google is also removing it from devices that already have it installed,” the researchers write in their report.

What Is A “Certified Android Device”? Here Is How To Check If You Have It

In the ecosystem of devices running Google’s mobile operating system, “Certified Android Device” is the new buzzword introduced by the company.

In an effort to build users’ trust in manufacturers and their Android devices, the certification assures buyers that the devices they’re purchasing have been checked for compatibility and don’t include anything malicious.

At the center of the certified Android device program is Play Protect, launched earlier this year, which offers automatic app scanning and the ‘Find My Device’ feature.

Google says they make sure that Android phones and tablets from their certified device partners stick to Android’s security and permissions models. By running hundreds of tests, the company tries to eliminate the possibility of the devices carrying malware or some adulterated version of pre-installed Google apps.

How to check if you have a certified Android device?

To help shoppers spot a certified Android device at a retail store, according to an earlier blog post by Google, certified devices carry a Google Play Protect logo on the packaging.

If you already own an Android phone and want to check whether it’s a certified Android device, you can do so within the Google Play app. Go to Google Play > Settings. Scroll down to the bottom of the Settings screen to find the certification details under “Device Certification”.


Currently, there are 100+ partners making certified Android devices for their markets. The list includes the likes of Samsung, Motorola, HTC, LG, HMD and Xiaomi. You can check the complete list here, which includes other, less common brands.

The company has also set up a website that flaunts features of the certified Android devices.

ATM In Computer Networks: History And Basic Concepts

ATM, also known as Asynchronous Transfer Mode, is a telecommunication technology, developed at the data link layer, to carry many different kinds of data, such as web traffic, voice and video, while providing QoS (Quality of Service) at the same time.

History of ATM

Let’s first have a look at why the concept of ATM was developed in the first place. In the 1990s, both mobile carrier and internet transfer speeds boomed. At the same time, services such as voice and video calls had started to come into play. In a nutshell, the internet world and the telephony world were converging into each other. Thus networking QoS factors such as latency, jitter, data rate and real-time data delivery became more important.

Nonetheless, the underlying protocols were still largely the same across most of the internet world, spanning hardware and software alike. On top of that, there was no solid concept or technology at the telecommunication level that could address all of these newly arising issues at the same time. So, ATM was born.

Basic ATM concepts

Here are some of the fundamental ATM concepts which are used most often:

Cells

In ATM, the most basic data transfer units are called cells. While at the data link layer data transfer units are generally known as frames, ATM’s data transfer unit was named the cell. A cell is 53 bytes, of which 5 bytes are the header and the remaining 48 bytes are the payload. That means around 10% of each cell is header.

For example, if we push 53 GB of traffic across an ATM link, 5 GB of it will be just headers. To process 5 GB of header for every 53 GB carried, the processing technology has to be scalable and fast enough to do the job. Can you guess how ATM addresses this? Keep on reading…

Because of the fixed, small size (53 bytes), it was easier to build simple buffering hardware. But there was a lot of overhead in sending small amounts of data. Moreover, there was a segmentation and reassembly cost as well, because of the higher share of header bytes and the overhead of processing them.
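To make the arithmetic concrete, here is a minimal Python sketch (our own illustration, not part of any ATM stack; the function names are invented) that works out how many cells a payload needs and how much of the traffic on the wire is header:

```python
# Illustrative sketch of ATM cell arithmetic; constants follow the 53-byte cell described above.
CELL_SIZE = 53                           # total cell size in bytes
HEADER_SIZE = 5                          # header bytes per cell
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE   # 48 payload bytes per cell

def cells_needed(payload_bytes: int) -> int:
    """How many 53-byte cells are needed to carry payload_bytes of data."""
    return -(-payload_bytes // PAYLOAD_SIZE)   # ceiling division

def header_fraction(payload_bytes: int) -> float:
    """Fraction of the bytes on the wire that are ATM headers."""
    n = cells_needed(payload_bytes)
    return (n * HEADER_SIZE) / (n * CELL_SIZE)

print(cells_needed(10_000))                 # 209 cells for a 10,000-byte payload
print(round(header_fraction(10_000), 3))    # 0.094, i.e. roughly 10% of the wire is headers
```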

ATM Layers

The whole ATM concept, just like the OSI system, was divided into different layers. Each layer was assigned its own work, just as each layer of the OSI model is. We describe these layers later in the article.

Class of services

As discussed above, ATM was introduced to address the problems of real-time data and latency. A set of service classes was also introduced, based on factors such as bit rate (constant or variable) and timing (how strict the synchronisation between source and destination must be), to serve users with different requirements. The different classes of service are described later in the article.

Asynchronous Data Transfer

ATM uses asynchronous data transfer with statistical multiplexing, unlike a circuit-switched network, where bandwidth is wasted when no data transfer is taking place.
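As a rough illustration of why statistical multiplexing helps with bursty sources, here is a small toy simulation in Python. Everything in it (the number of sources, the activity probability, the slot model) is an assumption made up for this sketch, not something taken from the ATM standards:

```python
import random

# Toy model: 10 bursty sources, each of which has a cell to send in any
# given time slot with probability 0.3. All values are illustrative.
random.seed(1)
SOURCES, SLOTS, P_ACTIVE = 10, 1000, 0.3

cells_sent = 0
peak_active = 0
for _ in range(SLOTS):
    active = sum(random.random() < P_ACTIVE for _ in range(SOURCES))
    cells_sent += active
    peak_active = max(peak_active, active)

# Circuit switching dedicates one slot per source per time unit,
# whether or not that source has anything to send.
circuit_capacity = SOURCES * SLOTS

print("utilisation with dedicated circuits:", cells_sent / circuit_capacity)  # ~0.3, the rest is wasted
print("average active sources per slot:", cells_sent / SLOTS)                 # ~3 out of 10
print("peak simultaneously active sources:", peak_active)                     # headroom a shared link needs
```

In this toy model the dedicated circuits sit roughly 70% idle, whereas a statistically multiplexed link can be sized closer to the average load, with some headroom for the peaks.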

Connection Type

However, ATM uses a connection-oriented model for data transfer, mostly for the purpose of Quality of Service. ATM operates by setting up a connection with the destination using the first cell, and the rest of the cells follow the same path for data delivery. That means the order of the cells is guaranteed, but not their delivery.

This connection is called a virtual circuit. It means that a path from the source to the destination has to be found before the data transfer is initiated. Along the path, all the ATM switches (devices) allocate some resources for the connection based on the class of service chosen by the user.

Transmission Medium

As mentioned before, one main purpose of introducing ATM was to make all types of data transmission independent of the medium. That means it could work on the telephony side as well as the internetworking side. The transmission medium could be a CAT6 cable, a fibre cable, a WAN link, or even the payload of another carrier system. With such flexibility, ATM was supposed to give quality service in most cases.

Being independent of the medium also brought ATM a lot of problems, which were addressed at the convergence sublayer.

One other interesting fact arises out of the situations mentioned above:

  • ATM can work with CBR (Constant Bit Rate) as well as VBR (Variable Bit Rate) traffic because of its flexible data transmission.

Quality of Service and Service categories in ATM:

Quality of Service in ATM is handled in different ways based on a number of criteria. Based on these, different service categories have been introduced, such as Available Bit Rate, Unspecified Bit Rate, Constant Bit Rate and Variable Bit Rate.

ATM Layers

Just like the OSI model, ATM protocol stack has also been divided into three layers with different assigned functionalities:

  • ATM Adaptation Layer (AAL)
    • Convergence Sublayer (CS) and,
    • Segmentation & Reassembly Sublayer (SAR)
  • ATM Layer and,
  • Physical Layer
    • Transmission Convergence Sublayer (TC) and,
    • Physical Medium Sublayer

ATM Adaptation Layer (AAL)

The main function of the adaptation layer is mapping applications’ data to ATM cells. If you are familiar with the different layers of the OSI system, you can relate the adaptation layer to the application layer of the OSI model.

The adaptation layer is also responsible for segmenting data packets and reassembling them at the destination host.

Convergence sublayer

We have already seen that ATM was introduced to handle different data types along with variable transmission rates. It is the convergence sublayer which is responsible for these features.

In simpler terms, the convergence sublayer is where different types of traffic, such as voice, video and data, converge.

The convergence sublayer offers different kinds of services to different applications, such as voice, video and browsing. Since different applications need different data transmission rates, the convergence sublayer makes sure that these applications get what they need. A few examples of the services it offers follow:

CBR (Constant Bit Rate)

CBR provides guaranteed bandwidth and is suitable for real-time traffic. In CBR mode, the user declares the required rate at the beginning of the connection set-up, and resources are reserved for the source at the different hops or stations accordingly. Besides the declared rate, the user is also given guarantees on delay and delay variation.

ABR (Available Bit Rate)

ABR is suitable for bursty traffic and provides feedback about congestion. In ABR mode, the source relies heavily on network feedback: if bandwidth is available, more data can be transferred. This helps in achieving maximum throughput with minimum loss.

UBR (Unspecified Bit Rate)

UBR is a cheaper solution suitable for bursty traffic. In UBR mode, the user does not tell the network what kind of data it will send or at what bit rate.

Thus, the source sends data whenever it wants. However, there are disadvantages: there is no feedback about the cells that have been sent, so there is no guarantee that they will reach their destination, and if there is congestion in the network, cells may be dropped.

VBR (Variable Bit Rate)

In VBR mode, the user declares the maximum and the average bit rate at the beginning. VBR comes in real-time as well as non-real-time variants.

For example, VBR can be used for real-time services such as video conferencing, and also for non-real-time services such as stored-video playback.
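The practical difference between these classes is what the source declares at connection set-up. The following Python sketch is a simplified, hypothetical model of such a declaration; the field names echo common ATM terminology (peak, sustainable and minimum cell rate), but the structure and the example numbers are ours, not from any standard API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrafficContract:
    """Simplified, illustrative view of what a source declares at connection set-up."""
    service_class: str                           # "CBR", "VBR", "ABR" or "UBR"
    peak_cell_rate: Optional[int] = None         # cells/s the source will never exceed
    sustainable_cell_rate: Optional[int] = None  # long-term average rate (declared for VBR)
    minimum_cell_rate: Optional[int] = None      # floor the network still guarantees (ABR)
    real_time: bool = False                      # strict delay/jitter requirements?

# Hypothetical examples loosely matching the descriptions above:
voice_call = TrafficContract("CBR", peak_cell_rate=170, real_time=True)
video_conf = TrafficContract("VBR", peak_cell_rate=4000, sustainable_cell_rate=1500, real_time=True)
file_xfer  = TrafficContract("ABR", minimum_cell_rate=100)
web_browse = TrafficContract("UBR")  # no guarantees; cells may be dropped under congestion
```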

AAL Types & Services Offered in Detail

Here are the services offered under the different AAL types:

  • AAL 1: Connection-oriented, constant bit rate (e.g. voice)
  • AAL 2: Connection-oriented, variable bit rate (e.g. packet-based video)
  • AAL 3 / AAL 4: Connection-oriented, variable bit rate (e.g. file transfer); connectionless, variable bit rate (e.g. LAN data transfer applications such as frame relay)
  • AAL 5: Bursty data, with error control left to the higher-layer protocols

Segmentation and Reassembly Sublayer (SAR)

As the name suggests, the SAR sublayer is responsible for segmenting higher-layer data into 48-byte cell payloads. At the receiving end, the same sublayer reassembles the data.

When ATM was created, it was envisioned that the technology would be scalable and would support very high speeds. To achieve this, the cell payload was kept at 48 bytes so that segmentation and reassembly could be done quickly. ATM switches also employ ASIC (Application-Specific Integrated Circuit) chips to make this fast.
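A minimal sketch of the segmentation and reassembly idea in Python (illustrative only; a real AAL also adds its own trailers, length fields and CRCs, which are omitted here):

```python
PAYLOAD_SIZE = 48  # bytes of user data carried in each ATM cell

def segment(data: bytes) -> list:
    """Split a higher-layer packet into 48-byte cell payloads, zero-padding the last one."""
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        chunk = data[i:i + PAYLOAD_SIZE]
        cells.append(chunk.ljust(PAYLOAD_SIZE, b"\x00"))
    return cells

def reassemble(cells, original_length: int) -> bytes:
    """Concatenate cell payloads and strip the padding added during segmentation."""
    return b"".join(cells)[:original_length]

packet = b"hello, ATM" * 20            # a 200-byte higher-layer packet
cells = segment(packet)
print(len(cells))                      # 5 cells: 4 full ones plus 1 padded one
assert reassemble(cells, len(packet)) == packet
```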

ATM Layer

  • The ATM layer is akin to the network layer of the OSI model, although it also has data link properties.
  • To set up a connection, the ATM layer uses globally unique addresses; ATM does not use the IP addressing system.
  • Path and circuit identifiers are used once a connection has been established, as the sketch below illustrates.
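To illustrate the last two points, here is a toy Python model of how an ATM switch forwards cells once a virtual circuit exists: each incoming port and identifier pair (the VPI/VCI) is looked up in a translation table that yields the outgoing port and the new identifiers written into the cell header. The table entries and numbers below are invented for illustration; real switches install these entries during connection set-up.

```python
# Toy forwarding table for one ATM switch:
# (in_port, vpi, vci) -> (out_port, new_vpi, new_vci). Entries are hypothetical.
forwarding_table = {
    (1, 0, 33): (3, 5, 101),
    (2, 4, 87): (1, 0, 42),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Forward a cell: look up its circuit, rewrite VPI/VCI and pick the output port."""
    entry = forwarding_table.get((in_port, vpi, vci))
    if entry is None:
        return None  # no virtual circuit for this cell; a real switch would drop it
    return entry     # (out_port, new_vpi, new_vci)

print(switch_cell(1, 0, 33))   # (3, 5, 101): cell leaves port 3 with rewritten identifiers
print(switch_cell(9, 0, 99))   # None: unknown circuit, cell is dropped
```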

Why did ATM technology not survive?

By now you have read plenty about ATM, and the question “Why did ATM technology not survive if it was so efficient and advanced?” has probably crossed your mind.

Even though ATM addressed a lot of issues and offered an à la carte solution to users’ needs, several other factors kept it from competing for long. Here are some of the reasons:

The speed of device operation

Many ATM devices operated at speeds different from those of the equipment around them. For example, OC-3 ran at around 155 Mbps and OC-12 at 622 Mbps.

Costly & complex Hardware

Because both the telephony and the internet industries were involved, standardization of the technology took too long. Over time, it also grew too complex, resulting in sophisticated hardware that was costly compared with ordinary computer networking devices. For example, ATM switches were more expensive than Ethernet switches, which work at layer 2.

Non-IP based addressing system

Making the situation worse, most of the networking software and hardware on the market was built for IP-based networks rather than ATM-based ones, so not much software was available for ATM. Also, no one wanted to move to ATM by investing a lot of money in new hardware.

Delay

Last but not least, there was too much delay in standardizing the protocols. Many leading networking companies also tried to push their own proprietary variants to influence the market.

Bringing together all the companies and working groups (including research and study groups) from the telephony as well as the internet worlds delayed the whole standardization process.

Gradually, because of the major issues mentioned above and other minor ones, ATM was phased out of the market.