DataRobot is acquiring Paxata to add data prep to machine learning platform

DataRobot, a company best known for its automated machine learning (AutoML) platform, announced today that it intends to acquire Paxata, a data prep platform startup. The companies did not reveal the purchase price.

Paxata raised a total of $90 million before today’s acquisition, according to the company.

Up until now, DataRobot has concentrated mostly on the machine learning and data science aspects of the workflow: building and testing a model, then putting it into production. Data prep was left to other vendors like Paxata, but DataRobot, which raised $206 million in September, saw an opportunity to fill a gap in its platform with Paxata.

“We’ve identified, because we’ve been focused on machine learning for so long, a number of key data prep capabilities that are required for machine learning to be successful. And so we see an opportunity to really build out a unique and compelling data prep for machine learning offering that’s powered by the Paxata product, but takes the knowledge and understanding and the integration with the machine learning platform from DataRobot,” Phil Gurbacki, SVP of product development and customer experience at DataRobot, told TechCrunch.

Prakash Nanduri, CEO and co-founder at Paxata, says the two companies were a great fit and it made a lot of sense to come together. “DataRobot has got a significant number of customers, and every one of their customers have a data and information management problem. For us, the deal allows us to rapidly increase the number of customers that are able to go from data to value. By coming together, the value to the customer is increased at an exponential level,” he explained.

DataRobot is based in Boston, while Paxata is in Redwood City, Calif. The plan moving forward is to make Paxata a West Coast office, and all of the company’s almost 100 employees will become part of DataRobot when the deal closes.

While the two companies are working together to integrate Paxata more fully into the DataRobot platform, the companies also plan to let Paxata continue to exist as a standalone product.

DataRobot has raised more than $431 million, according to PitchBook data. It raised $206 million of that in its last round. At the time, the company indicated it would be looking for acquisition opportunities when it made sense.

This match-up seems particularly good, given how well the two companies’ capabilities complement one another, and how much customer overlap they have. The deal is expected to close before the end of the year.

Neural Magic gets $15M seed to run machine learning models on commodity CPUs

Neural Magic, a startup founded by an MIT professor who figured out a way to run machine learning models on commodity CPUs, announced a $15 million seed investment today.

Comcast Ventures led the round with participation from NEA, Andreessen Horowitz, Pillar VC and Amdocs. The company had previously received a $5 million pre-seed, bringing the total raised so far to $20 million.

The company also announced early access to its first product, an inference engine that data scientists can run on ordinary CPUs rather than specialized chips like GPUs or TPUs. That could greatly reduce the cost associated with machine learning projects by allowing data scientists to use commodity hardware.

The idea for this solution came from work by MIT professor Nir Shavit. As he tells it, he was working on neurobiology data in his lab and found a way to use the commodity hardware he had in place. “I discovered that with the right algorithms we could run these machine learning algorithms on commodity hardware, and that’s where the company started,” Shavit told TechCrunch.

He says there is a false notion that you need specialized chips or hardware accelerators to have the necessary resources to run these jobs, but it doesn’t have to be that way. His company not only lets you use commodity hardware, it also works with more modern development approaches like containers and microservices.

“Our vision is to enable data science teams to take advantage of the ubiquitous computing platforms they already own to run deep learning models at GPU speeds in a flexible and containerized way that only commodity CPUs can deliver,” Shavit explained.

He says this also eliminates the memory limitations of these other approaches because CPUs have access to much greater amounts of memory, and this is a key advantage of his company’s approach over and above the cost savings.

“Yes, running on a commodity processor you get the cost savings of running on a CPU, but more importantly, it eliminates all of these huge commercialization problems and essentially this big limitation of the whole field of machine learning of having to work on small models and small data sets because the accelerators are kind of limited. This is the big unlock of Neural Magic,” he said.

Gil Beyda, managing director at lead investor Comcast Ventures, sees a huge market opportunity in an approach that lets people use commodity hardware. “Neural Magic is well down the path of using software to replace high-cost, specialized AI hardware. Software wins because it unlocks the true potential of deep learning to build novel applications and address some of the industry’s biggest challenges,” he said in a statement.

Google Dex language simplifies array math for machine learning

Engineers at Google have unveiled Dex, a prototype functional language designed for array processing. Array processing is a cornerstone of the math used in machine learning applications and other computationally intensive work.

The chief goal for the Dex language, according to a paper released by Google researchers, is to allow programmers to work efficiently and concisely with arrays using a compact, functional syntax.

Existing math-and-stats languages and libraries, such as MATLAB and NumPy, already have widely used array processing techniques and syntaxes, as do more general purpose languages such as Fortran and C. But the paper’s authors were unhappy with the “obfuscated” feel of the former and the “heaviness” of the latter.

Dex, patterned after the Haskell and ML family of languages, uses type information to make writing code for processing arrays both succinct and explicit. Introductory Dex examples show how the type system works with both regular values (integers and reals) and arrays. Other examples show how to express common problems such as estimating pi or plotting a Mandelbrot fractal.
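The paper’s introductory examples, such as estimating pi, are built on bulk operations over arrays of samples. Dex syntax is still evolving, so as a rough illustration of the kind of computation those examples express, here is the Monte Carlo pi estimate sketched in plain Python rather than Dex:

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle, multiplied by 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples
```

In Dex, the per-sample loop would instead be written as a typed array computation, which is precisely the succinctness the language is after.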

Like Python or the R language, Dex can run prewritten programs from the CLI, interactively in a REPL, or by way of a notebook-style interface. The current prototype supports all three modes.

Dex uses the LLVM language-compiler framework, which powers many general-purpose languages like Rust and Swift. LLVM is also proving useful for constructing domain-specific languages, or DSLs, languages designed to ease the handling of a deliberately small set of tasks. Other LLVM-powered DSL projects for computational work include DLVM, a compiler for DSLs used in neural networks; and Triton, an intermediate language and compiler used for tiled neural network computations.

4 Mistakes of Machine Learning Startups

Have you heard of the Darwin Awards? Hop on YouTube and take a look; it’s generally pretty funny stuff. It’s a tongue-in-cheek honor that recognizes people for the most sophisticated attempts to do something they think is cool: one takes a selfie with a wounded bear, another bolts a jet engine onto a skateboard. These bold actions lead to fatal mistakes with dire consequences and funny comments. Spoiler alert: sadly, they all die. You don’t want your startup “to die” from the mistakes of machine learning.

For the past 25 years, I’ve seen people make errors thousands of times, but never a machine make a mistake. Today, a blunder in a machine learning project can cost a company millions and several years of useless work. For this reason, I’ve collected here the most common machine learning errors, relating to data, metrics, validation, and technology.

  1. Data

The odds of making a mistake while working with data are rather high; it is easier to cross a minefield safely than to work through a data set without an error. There are several common mistakes:

  • Unprocessed data. Unprocessed data is rubbish that will not let you be confident in the adequacy of the constructed model. Therefore, only pre-processed data should be the basis of any AI project.
  • Anomalies. Check the data for deviations and anomalies and get rid of them; removing such errors is one of the priorities of every machine learning project. Data may be incomplete or incorrect, or some information may be missing for a certain period.
  • Lack of data. The easiest way may be to run 10 experiments and take the result, but it is hardly the most correct one. A small and unbalanced amount of data will drive you to conclusions far from the truth. If you need to train a network to distinguish spectacled penguins from spectacled bears, a couple of bear photos won’t fly, even if there are thousands of penguin images.
  • Lots of data. Sometimes limiting the amount of data is the only correct solution; that is how you can get, for example, the most objective picture of future human actions. Our world and the human race are incredibly unpredictable, and as a rule, foretelling someone’s response based on their behavior in 1998 is like reading tea leaves: the result will be far from reality.
  2. Metrics

Accuracy is an essential metric in machine learning. However, senselessly chasing absolute accuracy can become a problem for an AI project, particularly if the goal is a predictive recommendation system. Accuracy can reach an incredible 99% if an online grocery supermarket recommends buying milk; I bet the buyer will take it, and the recommendation system will appear to work. But he would have bought the milk anyway, so there is little sense in such a recommendation. For a city resident who buys milk daily, what matters in such systems is an individual approach: promoting goods that were not already in the basket.

  3. Validation

A child learning the alphabet gradually masters letters, simple words, and idioms. He learns and processes information at a certain level. At the same time, the analysis of scientific papers is incomprehensible to the toddler, although the words in the articles consist of the same letters he learned.

The model in an AI project likewise learns from a specific data set. However, the project can’t check the quality of the model on the same data it trained on. To estimate the model, you must use pieces of information specially held out for verification and never used in training. That is how you achieve the most accurate assessment of model quality.
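The held-out evaluation described above starts with a simple split of the data. A minimal illustrative sketch (not code from any particular project):

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle the rows and hold out a fraction for evaluation only.
    The model must never see the held-out rows during training."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]
```

Quality measured on the held-out portion is an honest estimate; quality measured on the training portion is not.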

  4. Technology

The choice of technology in an AI project is still a common source of mistakes, with consequences that, if not fatal, are serious enough to hurt the efficiency and timeline of the project.

No wonder you can hardly find a more hyped theme in machine learning than neural networks, thanks to their reputation as a universal algorithm suitable for any task. But this tool won’t be the most effective or the fastest for every task.

The clearest example is Kaggle competitions. Neural networks do not always take first place; on the contrary, tree-based methods such as random forests have a better chance of winning, primarily on tabular data.

Neural networks are more often used to analyze visual information, voice, and other complex data. Reaching for a neural network by default may be the simplest solution nowadays, but the project team should understand clearly which algorithms suit a particular task.

I truly believe the machine learning hype will prove to be neither false, exaggerated, nor ungrounded. Machine learning is another engineering tool that makes our lives simpler and more comfortable, gradually changing them for the better.

For many established projects, this article may be just a nostalgic retrospective of the mistakes they have already made, survived, and overcome on the way to becoming a product company.

But for those who are just starting their AI venture, this is an opportunity to understand why taking a selfie with a wounded bear isn’t the best idea, and how not to join the endless lists of “dead” startups.

Rahko raises £1.3M seed from Balderton for quantum machine learning tech

There remains a problem with the race to create a quantum computer: experiments in this area can be extremely error-prone. Rahko is a new UK startup that thinks it can address this problem with what’s known as quantum machine learning.

It has now raised £1.3M ($1.6M) in a seed round led by Balderton Capital, a rare move for a VC that normally only comes in at the Series A level. Joining the round are AI Seed and angel investors Charles Songhurst (former Microsoft head of corporate strategy), Tom McInerney (founder, TGM Ventures), John Spindler (CEO, Capital Enterprise) and James Field (CEO, LabGenius).

Rahko says it is building ‘quantum discovery’ capabilities for chemical simulation, which could enable groundbreaking advances in batteries, chemicals, advanced materials and drugs. It was started by cofounders Leonard Wossnig, Edward Grant, Miriam Cha and Ian Horobin.

Leo and Ed were longtime collaborators through their PhDs at University College London. They had been working on research in quantum machine learning (QML) with now lead developers Shuxiang Cao and Hongxiang Chen for several years and had been consolidating all their research into a QML platform.

They say the QML platform attracted serious attention from a tech giant and overtures were made. Leo and Ed made the decision not to give away control of the sum of their work, and decided instead to launch a business to commercialize it.

Chemical simulation is a vital research capability that has not advanced significantly in recent years due to the limited computational power of classical computers. Rahko claims it has an arsenal of tools that may make quantum computers accessible and commercially usable at an accelerated pace, often through hybrid approaches with classical computers.

Leo Wossnig, CEO, said: “Most people find quantum computers mysterious and wonder if they are going to save or break the world as we know it. In reality, quantum computing is going to unlock radical advances in areas of research and technology in which we have found ourselves stuck for some time now. Our team is excited to get together every day to work on problems that would have been impossible to solve only a couple of years ago. We are delighted to welcome on board this unique group of investors who truly share our excitement.” Earlier this year, Wossnig was the recipient of the prestigious 2019 Google Fellowship in Quantum Computing, for his achievement in computer science.

Lars Fjeldsoe-Nielsen, General Partner at Balderton Capital, said: “Rahko is one of the top teams in the world working on a complex space at the very edge of science and computing. The application of discoveries within quantum has already been profound and impacted our fundamental understanding of the world around us. The pace and rate of change in this field over the past few years has been astonishing, and we feel incredibly lucky to be supporting this exceptional team as they continue to push the boundaries of what’s possible.”

Rahko is one of several startups originating from UCL’s Computer Science programme, supported by Conception X, a venture builder for deep tech startups. It works in partnership with several of the world’s largest quantum hardware manufacturers, leading academic teams and national laboratories.

Wossnig added: “Quantum software is a relatively new field. It is growing very quickly but at this stage the field is small enough for us to know all of the best teams out there and be working with many of them. IBM and Microsoft, for instance, have large software teams but we are partners with both of them.”

The entire quantum computing industry is relying on quantum hardware maturing to a scale that will allow powerful, commercially valuable applications, estimated to be three to five years away. Until that happens, it is a little premature to say definitively who is leading the race.

Vianai emerges with $50M seed and a mission to simplify machine learning tech

You don’t see a startup get a $50 million seed round all that often, but such was the case with Vianai, an early-stage startup launched by Vishal Sikka, former Infosys managing director and SAP executive. The company launched recently with a big check and a vision to transform machine learning.

Just this week, the startup had a coming out party at Oracle Open World, where Sikka delivered one of the keynotes and demoed the product for attendees. Over the last couple of years, since he left Infosys, Sikka has been thinking about the impact of AI and machine learning on society and the way it is being delivered today. He didn’t much like what he saw.

It’s worth noting that Sikka got his PhD from Stanford with a specialty in AI in 1996, so this isn’t something that’s new to him. What’s changed, as he points out, is the growing compute power and increasing amounts of data, all fueling the current AI push inside business. What he saw when he began exploring how companies are implementing AI and machine learning today was a lot of complex tooling, which, in his view, was far more complex than it needed to be.

He saw dense Jupyter notebooks filled with code. He said that if you looked at a typical machine learning model, and stripped away all of the code, what you found was a series of mathematical expressions underlying the model. He had a vision of making that model-building more about the math, while building a highly visual data science platform from the ground up.

The company has been iterating on a solution over the last year with two core principles in mind: explorability and explainability, which involves interacting with the data and presenting it in a way that helps the user attain their goal faster than the current crop of model-building tools.

“It is about making the system reactive to what the user is doing, making it completely explorable, while making it possible for the developer to experiment with what’s happening in a way that is incredibly easy. To make it explainable means being able to go back and forth with the data and the model, using the model to understand the phenomenon that you’re trying to capture in the data,” Sikka told TechCrunch.

He says the tool isn’t just aimed at data scientists, it’s about business users and the data scientists sitting down together and iterating together to get the answers they are seeking, whether it’s finding a way to reduce user churn or discover fraud. These models do not live in a data science vacuum. They all have a business purpose, and he believes the only way to be successful with AI in the enterprise is to have both business users and data scientists sitting together at the same table working with the software to solve a specific problem, while taking advantage of one another’s expertise.

For Sikka, this means refining the actual problem you are trying to solve. “AI is about problem solving, but before you do the problem solving, there is also a [challenge around] finding and articulating a business problem that is relevant to businesses and that has a value to the organization,” he said.

He is very clear that he isn’t looking to replace humans, but instead wants to use AI to augment human intelligence to solve actual human problems. He points out that this product is not automated machine learning (AutoML), which he considers a deeply flawed idea. “We are not here to automate the jobs of data science practitioners. We are here to augment them,” he said.

As for that massive seed round, Sikka knew it would take a big investment to build a vision like this, and with his reputation and connections, he felt it would be better to get one big investment up front, and he could concentrate on building the product and the company. He says that he was fortunate enough to have investors who believe in the vision, even though as he says, no early business plan survives the test of reality. He didn’t name specific investors, only referring to friends and wealthy and famous people and institutions. A company spokesperson reiterated they were not revealing a list of investors at this time.

For now, the company has a new product and plenty of money in the bank to get to profitability, which he states is his ultimate goal. Sikka could have taken a job running a large organization, but like many startup founders, he saw a problem, and he had an idea how to solve it. That was a challenge he couldn’t resist pursuing.

Machine learning operations don’t belong with cloudops

It’s Monday morning, and after a long weekend of system trouble the cloud operations team is discussing what happened. It seems that several systems that were associated with a very advanced, new inventory management system enabled with machine learning had issues over the weekend. The postmortem concluded the following:

  • The batch process that moved raw data from the operational database to the training database failed, as did the auto-recovery process. An ops team member working over the weekend attempted to resubmit, but caused not one but four partial updates that left the training database in an unstable state.
  • This caused the knowledge models in the machine learning systems to train with bad data and required that the new information in the knowledge base be removed and the models rebuilt.
  • Also, several outside data feeds, such as pricing and tax data, were updated at the same time to the training database. Although those worked fine, they too needed to be backed out of the knowledge database, considering that the operational data was not in a good state.
  • The system was unavailable for two days and the company lost $4 million, considering lost productivity, customer reactions, and PR issues.
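One mitigation for failures like those above is to validate each batch before it ever reaches the training database, so that a partial update is rejected rather than trained on. This is a hypothetical sketch; the field names and thresholds are invented for illustration:

```python
def validate_training_batch(rows, required_fields, max_null_fraction=0.01):
    """Reject a batch of records before loading it into the training
    database: fail loudly on an empty batch or on required fields with
    too many missing values, instead of silently training on bad data."""
    if not rows:
        raise ValueError("empty batch")
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        if nulls / len(rows) > max_null_fraction:
            raise ValueError(f"field {field!r} exceeds null threshold")
    return True
```

A guard like this turns a weekend of model rebuilds into a single failed job that pages the on-call engineer.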

This is not 2025; this is today. As enterprises find more uses for “cheap and good” cloud-based machine learning systems, we’re finding that the systems that leverage machine learning are complex to operate. The ops teams did not expect this degree of difficulty and complexity, and are finding themselves undertrained, understaffed, and underfunded.

The assumption is that the cloud operations teams could handle cloud-based databases, cloud-based storage, and cloud-based compute with a fairly easy transition. For the most part that’s been the case, considering that cloud-based systems are similar to traditional systems.

However, systems based on machine learning have not yet been seen for the most part by operations teams. These systems have specialized purposes, as well as specialized systems such as databases and knowledge engines that have to be monitored and managed in certain ways. This is where the current operations teams are failing.

The fix is pretty easy to understand, but most enterprises are not going to like it, considering it means spending more dollars for ML cloudops or abandoning ML cloudops. Machine learning systems are technological chainsaws. If used carefully, they are highly effective. If mishandled they can be dangerous. Failures can go undetected, and if the system automatically uses the resulting bad knowledge, you could end up with huge issues that may not be discovered until much damage is done. More risk than reward, it seems.

Automated machine learning or AutoML explained

The two biggest barriers to the use of machine learning (both classical machine learning and deep learning) are skills and computing resources. You can solve the second problem by throwing money at it, either for the purchase of accelerated hardware (such as computers with high-end GPUs) or for the rental of compute resources in the cloud (such as instances with attached GPUs, TPUs, and FPGAs).

On the other hand, solving the skills problem is harder. Data scientists often command hefty salaries and may still be hard to recruit. Google was able to train many of its employees on its own TensorFlow framework, but most companies barely have people skilled enough to build machine learning and deep learning models themselves, much less teach others how.

What is AutoML?

Automated machine learning, or AutoML, aims to reduce or eliminate the need for skilled data scientists to build machine learning and deep learning models. Instead, an AutoML system allows you to provide the labeled training data as input and receive an optimized model as output.

There are several ways of going about this. One approach is for the software to simply train every kind of model on the data and pick the one that works best. A refinement of this would be for it to build one or more ensemble models that combine the other models, which sometimes (but not always) gives better results.
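The train-everything-and-keep-the-winner strategy can be expressed generically. A minimal sketch, with the model objects and the training and scoring callbacks left abstract since they vary by framework:

```python
def pick_best_model(models, train_fn, score_fn):
    """AutoML's simplest strategy: train every candidate model and
    keep the one with the best validation score."""
    best_name, best_model, best_score = None, None, float("-inf")
    for name, model in models.items():
        trained = train_fn(model)          # fit on the training set
        score = score_fn(trained)          # evaluate on held-out data
        if score > best_score:
            best_name, best_model, best_score = name, trained, score
    return best_name, best_model, best_score
```

An ensemble refinement would keep the top few models and combine their predictions instead of discarding the runners-up.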

A second technique is to optimize the hyperparameters (explained below) of the best model or models to train an even better model. Feature engineering (also explained below) is a valuable addition to any model training. One way of de-skilling deep learning is to use transfer learning, essentially customizing a well-trained general model for specific data.

What is hyperparameter optimization?

All machine learning models have parameters, meaning the weights for each variable or feature in the model. These are usually determined by back-propagation of the errors, plus iteration under the control of an optimizer such as stochastic gradient descent.

Most machine learning models also have hyperparameters that are set outside of the training loop. These often include the learning rate, the dropout rate, and model-specific parameters such as the number of trees in a Random Forest.

Hyperparameter tuning or hyperparameter optimization (HPO) is an automatic way of sweeping or searching through one or more of the hyperparameters of a model to find the set that results in the best trained model. This can be time-consuming, since you need to train the model again (the inner loop) for each set of hyperparameter values in the sweep (the outer loop). If you train many models in parallel, you can reduce the time required at the expense of using more hardware.
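The outer loop over hyperparameter values can be sketched as a basic grid search, where train_and_score stands in for the expensive inner training loop:

```python
import itertools

def grid_search(param_grid, train_and_score):
    """Sweep every combination of hyperparameter values (the outer
    loop), training and scoring a model for each (the inner loop),
    and return the best-scoring combination."""
    best_params, best_score = None, float("-inf")
    keys = list(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Because each combination is independent, the sweep parallelizes trivially, which is the hardware-for-time trade-off mentioned above.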

What is feature engineering?

A feature is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. A feature vector combines all of the features for a single row into a numerical vector. Feature engineering is the process of finding the best set of variables and the best data encoding and normalization for input to the model training process.

Part of the art of choosing features is to pick a minimum set of independent variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature, or one should be dropped. Sometimes people perform principal component analysis (PCA) to convert correlated variables into a set of linearly uncorrelated variables.
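Before reaching for PCA, a simple correlation check is often enough to spot a redundant pair. A plain-Python Pearson correlation, for illustration:

```python
def pearson_correlation(xs, ys):
    """Correlation between two features; a value near +1 or -1
    suggests one of them is redundant and could be dropped or
    combined with the other."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```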

To use categorical data for machine classification, you need to encode the text labels into another form. There are two common encodings.

One is label encoding, which means that each text label value is replaced with a number. The other is one-hot encoding, which means that each text label value is turned into a column with a binary value (1 or 0). Most machine learning frameworks have functions that do the conversion for you. In general, one-hot encoding is preferred, as label encoding can sometimes confuse the machine learning algorithm into thinking that the encoded column is ordered.
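Both encodings are easy to sketch without a framework; the libraries provide equivalents, but this pure-Python illustration shows what they do under the hood:

```python
def label_encode(labels):
    """Replace each distinct label with an integer. The numeric order
    is arbitrary, which is why a model can mistake it for a ranking."""
    mapping = {v: i for i, v in enumerate(sorted(set(labels)))}
    return [mapping[v] for v in labels], mapping

def one_hot_encode(labels):
    """Turn each label into a binary vector with a single 1 in the
    column for that label's category."""
    categories = sorted(set(labels))
    index = {v: i for i, v in enumerate(categories)}
    return [[1 if index[v] == i else 0 for i in range(len(categories))]
            for v in labels], categories
```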

To use numeric data for machine regression, you usually need to normalize the data. Otherwise, the numbers with larger ranges might tend to dominate the Euclidean distance between feature vectors, their effects could be magnified at the expense of the other fields, and the steepest descent optimization might have difficulty converging. There are a number of ways to normalize and standardize data for machine learning, including min-max normalization, mean normalization, standardization, and scaling to unit length. This process is often called feature scaling.
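Min-max normalization, the first of those techniques, can be sketched as:

```python
def min_max_scale(values, new_min=0.0, new_max=1.0):
    """Rescale a numeric column into [new_min, new_max] so that
    large-ranged features don't dominate distance calculations."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [new_min for _ in values]  # constant column: no spread
    span = hi - lo
    return [new_min + (v - lo) / span * (new_max - new_min)
            for v in values]
```

The other techniques differ only in the statistics used: mean normalization centers on the mean, standardization divides by the standard deviation, and unit-length scaling divides by the vector norm.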

Some of the transformations that people use to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtract Year of Birth from Year of Death and you construct Age at Death, which is a prime independent variable for lifetime and mortality analysis. In other cases, feature construction may not be so obvious.
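The Year of Death minus Year of Birth example amounts to one line per record; the field names here are invented for illustration:

```python
def add_age_at_death(records):
    """Construct a derived feature by subtracting one existing
    column from another."""
    for r in records:
        r["age_at_death"] = r["year_of_death"] - r["year_of_birth"]
    return records
```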

What is transfer learning?

Transfer learning is sometimes called custom machine learning, and sometimes called AutoML (mostly by Google). Rather than starting from scratch when training models from your data, Google Cloud AutoML implements automatic deep transfer learning (meaning that it starts from an existing deep neural network trained on other data) and neural architecture search (meaning that it finds the right combination of extra network layers) for language pair translation, natural language classification, and image classification.

That’s a different process than what’s usually meant by AutoML, and it doesn’t cover as many use cases. On the other hand, if you need a customized deep learning model in a supported area, transfer learning will often produce a superior model.

AutoML implementations

There are many implementations of AutoML that you can try. Some are paid services, and some are free source code. The lists below are by no means complete or final.

AutoML services

All of the big three cloud services have some kind of AutoML. Amazon SageMaker does hyperparameter tuning but doesn’t automatically try multiple models or perform feature engineering. Azure Machine Learning has both AutoML, which sweeps through features and algorithms, and hyperparameter tuning, which you typically run on the best algorithm chosen by AutoML. Google Cloud AutoML, as I discussed earlier, is deep transfer learning for language pair translation, natural language classification, and image classification.

A number of smaller companies offer AutoML services as well. For example, DataRobot, which claims to have invented AutoML, has a strong reputation in the market. And while dotData has a tiny market share and a mediocre UI, it has strong feature engineering capabilities and covers many enterprise use cases. H2O.ai Driverless AI, which I reviewed in 2017, can help a data scientist turn out models like a Kaggle master, doing feature engineering, algorithm sweeps, and hyperparameter optimization in a unified way.

AutoML frameworks

AdaNet is a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. Auto-Keras is an open source software library for automated machine learning, developed at Texas A&M, that provides functions to automatically search for architecture and hyperparameters of deep learning models. NNI (Neural Network Intelligence) is a toolkit from Microsoft to help users design and tune machine learning models (e.g., hyperparameters), neural network architectures, or a complex system’s parameters in an efficient and automatic way.

You can find additional AutoML projects and a fairly complete and current list of papers about AutoML on GitHub.

5 machine learning tools to ease software development

Most discussions of developers making use of machine learning revolve around creating AI-powered applications and the tools used to create them: TensorFlow, PyTorch, Scikit-learn, and so on.

But there is another way machine learning is impacting software development: by way of new development tools that use machine learning techniques to make programming easier and more productive. Here are five projects, three commercial and two experimental, that put machine learning to work for developers within the development process.

Kite

Kite is a code completion tool, available for most major code editors, that uses machine learning techniques to fill in your code as you’re typing it.

The machine learning model used by Kite is created by taking publicly available code on GitHub, deriving an abstract syntax tree from it, and using that as a basis for the model. According to Kite, this allows auto-suggestion and auto-completion to be derived from the context and intention of the code, rather than just the text.
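Deriving an abstract syntax tree from source code is straightforward with Python’s standard ast module. This sketch extracts one kind of structural information (the names of defined functions) of the sort a model could be trained on; it is an illustration, not Kite’s actual pipeline:

```python
import ast

def function_names(source: str) -> list:
    """Parse Python source into an abstract syntax tree and collect
    the names of the functions it defines."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)]
```

Working from the tree rather than raw text is what lets a tool reason about the structure and intent of code instead of treating it as a stream of characters.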

Someone Transformed Windows XP Into A Bitcoin Blockchain Machine

Recently, a developer called ‘sh1zuku’ managed to run Windows XP on the Bitcoin SV blockchain and put it on the web as a website. According to reports from CoinGeek, the creation is still a limited version of Windows XP, and lots of programs won’t let you get past the first screen.

Well, if we look around, Windows still powers the majority of desktop operating systems. Among older versions, Windows XP seems to be the most popular, although its time passed long ago. Thanks to its immense popularity, we still get glimpses of the old operating system every now and then.

For those who don’t know, Bitcoin SV (Satoshi’s Vision) is a popular blockchain created from Bitcoin Cash. It’s a hard fork of Bitcoin, and it sits on the list of the top 20 crypto coins.

For instance, if you open Local Drive C: from File Explorer, nothing happens, and attempting various other tasks leads the emulated Windows XP to an error.

However, this version of Windows XP lets users play the popular game Minesweeper, listen to selected tracks via the Winamp music player, draw something in the Paint application, and so on. Internet Explorer and File Explorer are essentially limited to a single page.

Although it was not the first time someone emulated Windows XP on the web, CoinGeek called particular attention to this project because it showcased Bitcoin SV’s “ability to routinely handle blocks that are substantially larger than what are found with other blockchains.”
