
13 programming languages defining the future of coding



1. R

At heart, R is a programming language, but it's more of a standard bearer for the world's current obsession with using statistics to unlock patterns in large blocks of data. R was designed by statisticians and scientists to make their work easier. It comes with most standard functions used in data analysis, and many of the most useful statistical algorithms are already implemented as freely distributed libraries. It has most of what data scientists need to do data-driven science.
Many people end up using R inside an IDE as a high-powered scratchpad for playing with data. R Studio and R Commander are two popular front ends that let you load up your data and play with it. They make it less of a compile-and-run language and more of an interactive world in which to do your work.
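For a taste, here is a minimal sketch in R using the built-in mtcars dataset; the model is our own arbitrary example, not anything prescribed:

    data(mtcars)                              # a sample dataset that ships with R
    fit <- lm(mpg ~ wt + hp, data = mtcars)   # fit a linear regression model
    summary(fit)                              # coefficients, p-values, R-squared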

2. Java 8

Java isn't a new language. It's often everyone's first language, thanks to its role as the lingua franca for AP Computer Science. There are billions of JAR files floating around running the world.
But Java 8 is a bit different. It comes with new features aimed at offering functional techniques that can unlock the parallelism in your code. You don't have to use them. You could stick with the old Java because it still works. But if you don't, you'll miss the chance to offer the Java virtual machine (JVM) even more structure for optimizing the execution, and you'll miss the chance to think functionally and write cleaner, faster, and less buggy code.
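As a small, hedged illustration of the new functional style (the numbers are arbitrary), summing squares with a parallel stream looks like this:

    import java.util.Arrays;
    import java.util.List;

    public class SumOfSquares {
        public static void main(String[] args) {
            List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);
            // parallelStream() gives the JVM the structure it needs to
            // split the work across cores
            int sum = nums.parallelStream()
                          .mapToInt(n -> n * n)
                          .sum();
            System.out.println(sum);  // prints 55
        }
    }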

 

3. Swift

Apple saw an opportunity when programming newbies complained about the endless mess of writing in Objective C. So they introduced Swift and strongly implied that it would replace Objective C for writing for the Mac or the iPhone. They recognized that creating header files and juggling pointers was antiquated. Swift hides that busywork, making it much more like writing in a modern language like Java or Python. Finally, the language does all the scut work, just as a modern language should.
The language specification is broad. It's not just a syntactic cleanup of Objective C. There are plenty of new features, so many that they're hard to list. Some coders might even complain that there's too much to learn, and Swift will make life more complicated for teams who need to read each other's code. But let's not focus too much on that. iPhone coders can now spin out code as quickly as others. They can work with a cleaner syntax and let the language do the busy work.
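A brief sketch of what that cleaner syntax looks like; the values are invented for illustration:

    // no header files, no raw pointers; types are inferred
    let devices = ["Mac", "iPhone"]
    let lengths = devices.map { $0.count }   // [3, 6]
    if let first = lengths.first {
        print("First name length: \(first)")
    }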

4. Go

When Google set out to build a new language to power its server farms, it decided to build something simple by throwing out many of the cleverer ideas often found in other languages. They wanted to keep everything, as one creator said, "simple enough to hold in one programmer's head." There are no complex abstractions or clever metaprogramming in Go—just basic features specified in a straightforward syntax.
This can make things easier for everyone on a team because no one has to fret when someone else digs up a neat idea from the nether reaches of the language specification.
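A minimal sketch of that simplicity, using the handful of concurrency primitives Go does offer (our own toy example):

    package main

    import "fmt"

    func main() {
        ch := make(chan string)
        // a goroutine and a channel: Go's few, plain concurrency tools
        go func() { ch <- "hello from a goroutine" }()
        fmt.Println(<-ch)
    }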

5. CoffeeScript

Somewhere along the line, some JavaScript programmers grew tired of typing all those semicolons and curly brackets. So they created CoffeeScript, a preprocessing tool that turns their syntactic shorthand back into regular JavaScript. It's not so much a language as a way to save time hitting all those semicolon and curly-bracket keys.
Jokers may claim that CoffeeScript is little more than a way to rest your right hand's pinkie, but they're missing the point. Cleaner code is easier to read, and we all benefit when we can parse code quickly in our heads. CoffeeScript makes the code easier for everyone to understand, and that pays off across the whole team.
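A tiny sketch of the shorthand (names are arbitrary); it compiles down to plain JavaScript:

    # no semicolons, no curly brackets
    square = (x) -> x * x
    console.log square 9    # prints 81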

6. D

For many programmers, there's nothing like the very clean, simple world of C. The syntax is minimal, and the structure maps cleanly to the CPU; some call it portable assembly. Yet for all these advantages, some C programmers feel they're missing out on the conveniences built into newer languages.
That's why D is being built. It's meant to update all the logical purity of C and C++ while adding in modern conveniences such as memory management, type inference, and bounds checking.
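A short sketch of those conveniences (our own toy example):

    import std.stdio;

    void main()
    {
        auto xs = [1, 2, 3];   // type inference; the array is GC-managed
        writeln(xs[1]);        // array accesses are bounds-checked at run time
    }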

7. Less.js

Just like Coffee Script, Less.js is really just a preprocessor for your files, one that makes it easier to create elaborate CSS files. Anyone who has tried to build a list of layout rules for even the simplest website knows that creating basic CSS requires plenty of repetition; Less.js handles all this repetition with loops, variables, and other basic programming constructs. You can, for instance, create a variable to hold that shade of green used as both a background and a highlight color. If the boss wants to change it, you only need to update one spot.
There are more elaborate constructs, such as mixins and nested rules, that effectively create blocks of standard layout commands that can be included in any number of CSS classes. If someone decides that the bold typeface needs to go, you only need to fix it at the root, and Less.js will push the new rule into all the other definitions.
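A hedged sketch in Less; the variable, mixin, and class names are invented for illustration:

    @brand-green: #3c763d;        // change this once to restyle everything

    .emphasis() {                 // a mixin: a reusable block of rules
      font-weight: bold;
    }

    .banner {
      background: @brand-green;
      .emphasis();
      a { color: @brand-green; }  // a nested rule
    }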

8. MATLAB

Once upon a time, MATLAB was a hardcore language for hardcore mathematicians and scientists who needed to juggle complex systems of equations and find solutions. It's still that, and more of today's projects need those complex skills. So MATLAB is finding its way into more applications as developers start pushing deeper into complex mathematical and statistical analysis. The core has been tested over the decades by mathematicians and now it's able to help mere mortals.
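For instance, solving a small system of equations takes a single operator in MATLAB; the numbers here are arbitrary:

    % solve A*x = b, i.e. 2x + y = 3 and x + 3y = 5
    A = [2 1; 1 3];
    b = [3; 5];
    x = A \ b     % the backslash operator returns x = [0.8; 1.4]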

9. Arduino

The Internet of Things is coming. More and more devices have embedded chips just waiting to be told what to do. Arduino isn't so much a new language as a set of C or C++ functions that you string together. The compiler does the rest of the work.
Many of these functions will be a real novelty for programmers, especially those used to creating user interfaces for general computers. You can read voltages, check the status of pins on the board, and, of course, control just how those LEDs flash to send inscrutable messages to the people staring at the device.
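A minimal Arduino sketch along those lines, blinking the onboard LED and reading a voltage; the pin and baud rate are arbitrary choices:

    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);
      Serial.begin(9600);
    }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);   // LED on
      delay(500);
      digitalWrite(LED_BUILTIN, LOW);    // LED off
      delay(500);
      Serial.println(analogRead(A0));    // raw 0-1023 reading from pin A0
    }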

10. CUDA

Most people take the power of their video cards for granted. They don't even think about how many triangles the video card is juggling, as long as their world is a complex, first-person shooter game. But if they would only look under the hood, they would find a great deal of power ready to be unlocked by the right programmer. The CUDA language is a way for Nvidia to open up the power of their graphics processing units (GPUs) to work in ways other than killing zombies or robots.
The key challenge to using CUDA is learning to identify the parallel parts of your algorithm. Once you find them, you can set up the CUDA code to blast through these sections using all the inherent parallel power of the video card. Some jobs, like mining Bitcoins, are pretty simple, but other challenges, like sorting and molecular dynamics, may take a bit more thinking. Scientists love using CUDA code for their large, multidimensional simulations.
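The classic starter example of that pattern is element-wise vector addition, where every element is independent; this is a generic sketch, not any particular project's code:

    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    // launched from the host with 256 threads per block:
    // add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);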

11. Scala

Everyone who's taken an advanced course in programming languages knows the academic world loves the idea of functional programming, which insists that each function have well-defined inputs and outputs but no way of messing with other variables. There are dozens of good functional languages, and it would be impossible to list all of them here. Scala is one of the best known, with one of the larger user bases. It was engineered to run on the JVM, so anything you write in Scala can run anywhere that Java runs—which is almost everywhere.
There are good reasons to believe that functional programming precepts, when followed, can build stronger code that's easier to optimize and often free of some of the most maddening bugs. Scala is one way to dip your toe into these waters.
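Here is one way to dip that toe: a pipeline of pure functions over an immutable list (our own example):

    // well-defined inputs and outputs, no shared mutable state
    val words   = List("scala", "runs", "on", "the", "jvm")
    val lengths = words.map(_.length)           // List(5, 4, 2, 3, 3)
    val total   = lengths.foldLeft(0)(_ + _)    // 17
    println(total)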

12. Haskell

Scala isn't the only functional language with a serious fan base. One of the most popular functional languages, Haskell, is another good place for programmers to begin. It's already being used for major projects at companies like Facebook. It's delivering real performance on real projects, something that often isn't the case for academic code.
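A small taste of the style, leaning on Haskell's lazy evaluation (a generic example, not code from any of those projects):

    -- pure functions over an infinite list; laziness keeps the work finite
    squares :: [Integer]
    squares = map (^ 2) [1 ..]

    main :: IO ()
    main = print (take 5 squares)   -- [1,4,9,16,25]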

13. Jolt

When XML was the big data format, a functional language called XSLT was one of the better tools for fiddling with large datasets coded in XML. Now that JSON has taken over the world, Jolt is one of the options for massaging your JSON data and transforming it. You can write simple filters that extract attributes, and Jolt will find them and morph them as you desire.
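As a hedged illustration (the field names are invented), a Jolt "shift" spec that renames an attribute looks like this:

    [ { "operation": "shift",
        "spec": { "user": { "name": "username" } } } ]

Fed the input {"user": {"name": "Ada"}}, this spec produces {"username": "Ada"}.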



Scientists turn memory chips into processors to speed up computing tasks


An international team of scientists has found a way to make memory chips perform computing tasks, work that is traditionally done by computer processors like those made by Intel and Qualcomm.
This means data could now be processed in the same spot where it is stored, leading to much faster and thinner mobile devices and computers.
This new computing circuit was developed by Nanyang Technological University, Singapore (NTU Singapore) in collaboration with Germany's RWTH Aachen University and Forschungszentrum Juelich, one of the largest interdisciplinary research centers in Europe.
It is built using state-of-the-art memory chips known as Redox-based resistive switching random access memory (ReRAM). Developed by global chipmakers such as SanDisk and Panasonic, this type of chip is one of the fastest memory modules that will soon be available commercially.
However, instead of storing information, NTU Assistant Professor Anupam Chattopadhyay in collaboration with Professor Rainer Waser from RWTH Aachen University and Dr Vikas Rana from Forschungszentrum Juelich showed how ReRAM can also be used to process data.
This discovery was published recently in Scientific Reports.
Current devices and computers have to transfer data from the memory storage to the processor unit for computation, while the new NTU circuit saves time and energy by eliminating these data transfers.
It can also boost the speed of current processors found in laptops and mobile devices by at least a factor of two.
By making the memory chip perform computing tasks, space can be saved by eliminating the processor, leading to thinner, smaller and lighter electronics. The discovery could also lead to new design possibilities for consumer electronics and wearable technology.

How the new circuit works
Currently, all computer processors in the market are using the binary system, which is composed of two states -- either 0 or 1. For example, the letter A will be processed and stored as 01000001, an 8-bit character.
However, the prototype ReRAM circuit built by Asst Prof Chattopadhyay and his collaborators processes data in more than just two states. For example, it can store and process data as 0, 1, or 2, known as a ternary number system.
Because ReRAM uses different electrical resistance to store information, it could be possible to store the data in an even higher number of states, hence speeding up computing tasks beyond current limitations.
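As a rough illustration of the digit savings (our own arithmetic, not the paper's): the 8-bit character A, 01000001 in binary, is the number 65, which fits in just four ternary digits. A few lines of Python show the conversion:

    # illustration only: the same value takes fewer digits in base 3
    def to_ternary(n):
        digits = ""
        while n:
            digits = str(n % 3) + digits
            n //= 3
        return digits or "0"

    print(to_ternary(0b01000001))   # '2102': four ternary digits vs eight bits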
Asst Prof Chattopadhyay, who is from NTU's School of Computer Science and Engineering, said that in current computer systems, all information has to be translated into a string of zeros and ones before it can be processed.
"This is like having a long conversation with someone through a tiny translator, which is a time-consuming and effort-intensive process," he explained. "We are now able to increase the capacity of the translator, so it can process data more efficiently."
The quest for faster processing is one of the most pressing needs for industries worldwide, as computer software is getting increasingly complex while data centres have to deal with more information than ever.
The researchers said that using ReRAM for computing will be more cost-effective than other computing technologies on the horizon, since ReRAMs will be available in the market soon.
The excellent properties of ReRAM like its long-term storage capacity, low energy usage and ability to be produced at the nanoscale level have drawn many semiconductor companies to invest in researching this promising technology.
The research team is now looking to engage industry partners to leverage this important advance of ReRAM-based ternary computing.
Moving forward, the researchers will also work on developing the ReRAM to process more than its current four states, which would further improve computing speeds, and on testing its performance in actual computing scenarios.

Journal Reference:

1.   Wonjoo Kim, Anupam Chattopadhyay, Anne Siemon, Eike Linn, Rainer Waser, Vikas Rana. Multistate Memristive Tantalum Oxide Devices for Ternary Arithmetic. Scientific Reports, 2016; 6: 36652. DOI: 10.1038/srep36652

Chip-sized, high-speed terahertz modulator raises possibility of faster data transmission



Tufts University engineers have invented a chip-sized, high-speed modulator that operates at terahertz (THz) frequencies and at room temperature at low voltages without consuming DC power. The discovery could help fill the "THz gap" that is limiting development of new and more powerful wireless devices that could transmit data at significantly higher speeds than currently possible.
Measurements show the modulation cutoff frequency of the new device exceeded 14 gigahertz, and it has the potential to work above 1 THz, according to a paper published online in Scientific Reports. By contrast, cellular networks occupy bands much lower on the spectrum, where the amount of data that can be transmitted is limited.
The device works through the interaction of confined THz waves in a novel slot waveguide with tunable, two-dimensional electron gas. The prototype device operated within the frequency band of 0.22-0.325 THz, which was chosen because it corresponded to available experimental facilities. The researchers say the device would work within other bands as well.
Although there is significant interest in using the THz band of the electromagnetic spectrum, which would enable the wireless transmission of data at speeds significantly faster than conventional technology, the band has been underutilized in part because of a lack of compact, on-chip components, such as modulators, transmitters, and receivers.
"This is a very promising device that can operate at terahertz frequencies, is miniaturized using mainstream semiconductor foundry, and is in the same form factor as current communication devices. It's only one building block, but it could help to start filling the THz gap," said Sameer Sonkusale, Ph.D., of Nano Lab, Department of Electrical and Computer Engineering, Tufts University, and the paper's corresponding author.

Journal Reference:
1.   P. K. Singh, S. Sonkusale. High Speed Terahertz Modulator on the Chip Based on Tunable Terahertz Slot Waveguide. Scientific Reports, 2017; 7: 40933. DOI: 10.1038/srep40933


Patients' electrocardiograph readings would be used as an encryption key to access their medical records



Researchers at Binghamton University, State University of New York, think your heart could be the key to your personal data. By measuring the electrical activity of the heart, researchers say they can encrypt patients' health records.

The fundamental idea is this: In the future, all patients will be outfitted with a wearable device, which will continuously collect physiological data and transmit it to the patients' doctors. Because electrocardiogram (ECG) signals are already collected for clinical diagnosis, the system would simply reuse the data during transmission, thus reducing the cost and computational power needed to create an encryption key from scratch.
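To make the idea concrete, here is a deliberately simplified Python sketch that turns an ECG window into symmetric key material by hashing it. This is our own illustration of the general concept, not the authors' published scheme, and the sample values are invented:

    import hashlib

    # stand-in ECG samples; a real system would use cleaned, quantized features
    ecg_window = [0.12, 0.98, -0.34, 0.55]
    features = ",".join(f"{s:.2f}" for s in ecg_window)
    key = hashlib.sha256(features.encode()).digest()  # 256 bits of key material
    print(key.hex())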

“There have been so many mature encryption techniques available, but the problem is that those encryption techniques rely on some complicated arithmetic calculations and random key generations,” said Zhanpeng Jin, a co-author of the paper “A Robust and Reusable ECG-based Authentication and Data Encryption Scheme for eHealth Systems.”

Those encryption techniques can't be "directly applied on the energy-hungry mobile and wearable devices," Jin added. "If you apply those kinds of encryptions on top of the mobile device, then you can burn the battery very quickly."

But there are drawbacks. According to Jin, one of the reasons ECG encryption has not been widely adopted is because it's generally more sensitive and vulnerable to variations than some other biometric measures. For instance, your electrical activity could change depending on factors such as physical exertion and mental state. Other more permanent factors such as age and health can also have an effect.

“ECG itself cannot be used for a biometric authentication purpose alone, but it’s a very effective way as a secondary authentication,” Jin said.
While the technology for ECG encryption is already here, its adoption will depend on patients' willingness to don wearables and on their comfort with constantly sharing their biometrics.

Apple, Google, and Uber join list of tech companies refusing to build Muslim registry




Apple, Google, and Uber have all broken their respective silences on whether they would participate in helping build a Muslim registry for the incoming Trump administration. An Apple spokesperson said, “We think people should be treated the same no matter how they worship, what they look like, who they love. We haven’t been asked and we would oppose such an effort.”
Earlier today, a Google spokesperson issued a statement saying, “In relation to the hypothetical of whether we would ever help build a ‘Muslim registry’ — we haven’t been asked, of course we wouldn’t do this and we are glad — from all that we’ve read — that the proposal doesn’t seem to be on the table.” Meanwhile, Uber responded with a terse “no” in response to a similar inquiry.

“WE ARE GLAD... THAT THE PROPOSAL DOESN’T SEEM TO BE ON THE TABLE.”

These are just the latest — but arguably among the most important and high-profile — Silicon Valley players to go on record refusing to build a database that could be used to track and target Muslim Americans. Pressure started mounting last month when The Intercept began asking tech companies about the subject and received a response only from Twitter, which said it would never participate in such a project.
The situation escalated this week when a Facebook spokesperson, who had initially refused to comment on the matter, accidentally emailed the company’s internal talking points to a reporter. The email compared any statement regarding the building of a Muslim registry to a “straw man” argument and suggested Facebook’s PR strategy should be to remain silent. BuzzFeed published the email, which then forced Facebook to issue a statement saying it had not been asked to help build a Muslim registry, nor would it agree to do so.
Since Facebook’s embarrassing stumble, a number of other tech companies have gone on the record disavowing the highly controversial Trump campaign promise. Microsoft PR head Frank X. Shaw said in a statement given to BuzzFeed, “We oppose discrimination and we wouldn’t do any work to build a registry of Muslim Americans.” Both Microsoft CEO Satya Nadella and Alphabet chief Larry Page attended a summit with President-elect Donald Trump on Wednesday, as did Apple CEO Tim Cook and Uber CEO Travis Kalanick.

APPLE AND UBER BOTH WENT ON THE RECORD AFTER GOOGLE SPOKE UP.

Ride-hailing company Lyft, which like Uber could hypothetically be asked to hand over user travel data, said today it would refuse to cooperate if the government asked for such data or other tools to build a Muslim registry. One notable exception has been Oracle, the cloud computing giant that has in the past counted the National Security Agency as a client. The company declined to comment when asked about a Muslim registry or whether it still works with the NSA. Separately, Trump yesterday appointed Oracle CEO Safra Catz to the executive committee of his transition team.


Google Launches Cloud Bigtable, A Highly Scalable And Performant NoSQL Database




With Cloud Bigtable, Google is launching a new NoSQL database offering today that, as the name implies, is powered by the company’s Bigtable data storage system, but with the added twist that it’s compatible with the Apache HBase API — which itself is based on Google’s Bigtable project. Bigtable powers the likes of Gmail, Google Search, and Google Analytics, so this is definitely a battle-tested service.
Google promises that Cloud Bigtable will offer single-digit millisecond latency and 2x the performance per dollar when compared to the likes of HBase and Cassandra. Because it supports the HBase API, Cloud Bigtable can be integrated with all the existing applications in the Hadoop ecosystem, but it also supports Google’s Cloud Dataflow.
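Because Cloud Bigtable speaks the HBase API, existing HBase client code should carry over largely unchanged. Here is a hedged Java sketch against the open-source HBase client; the table name, column family, and connection configuration are invented placeholders:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BigtableSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection();
                 Table table = conn.getTable(TableName.valueOf("metrics"))) {
                // write one cell, then read it back
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("count"),
                              Bytes.toBytes("42"));
                table.put(put);
                Result r = table.get(new Get(Bytes.toBytes("row1")));
                System.out.println(Bytes.toString(
                    r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("count"))));
            }
        }
    }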
Setting up a Cloud Bigtable cluster should only take a few seconds, and the storage automatically scales according to the user’s needs.

It’s worth noting that this is not Google’s first cloud-based NoSQL database product. With Cloud Datastore, Google already offers a high-availability NoSQL datastore for developers on its App Engine platform. That service, too, is based on Bigtable. Cory O’Connor, a Google Cloud Platform product manager, tells me Cloud Datastore focuses on read-heavy workloads for web apps and mobile apps.
“Cloud Bigtable is much the opposite — it’s designed for larger companies and enterprises where extensive data processing is required, and where workloads are more complex,” O’Connor tells me. “For example, if an organization needs to stream data into, run analytics on and serve data out of a single database at scale – Cloud Bigtable is the right system. Many of our customers will start out on Cloud Datastore to build prototypes and get moving quickly, and then evolve towards services like Cloud Bigtable as they grow and their data processing needs become more complex.”
The new service is now available in beta, which means it’s open to all developers but doesn’t offer an SLA or technical support.

Big Data & Hadoop Career Analysis


Market research and advisory firm Ovum estimates the big data market will grow from $1.7 billion in 2016 to $9.4 billion by 2020. As the market grows, enterprise challenges are shifting, skills requirements are changing, and the vendor landscape is morphing. The coming year promises to be a busy one for big data pros. Here are some key predictions for big data in 2017 from industry watchers and technology players.
·         The era of ubiquitous machine learning has arrived.
·         When data can’t move, bring the cloud to the data.
·         Applications, not just analytics, propel big data adoption.
·         The Internet of Things will integrate with enterprise applications.
·         Data virtualization will light up dark data.
·         A boom in prepackaged integrated cloud data systems.
·         Cloud-based object stores become a viable alternative to Hadoop HDFS.
·         Next-generation compute architectures enable deep learning at cloud scale.
·         Hadoop security is no longer optional.
·         Big data becomes fast and approachable.
·         Organizations leverage data lakes from the get-go to drive value.
·         The convergence of IoT, cloud, and big data creates new opportunities for self-service analytics.
·         Self-service data prep becomes mainstream as end users begin to shape big data.
·         Self-service analytics extends to data prep.
·         Analytics will be everywhere, thanks to embedded BI.
·         IT becomes the data hero.
·         Artificial intelligence is back in vogue.
·         Companies focus on business-driven applications to keep data lakes from becoming swamps.
·         Data agility separates winners and losers.
·         Blockchain transforms select financial service applications.

