Chipmaker Broadcom to buy VMware in $61 bln deal | Reuters

In one fell swoop, the deal will almost triple Broadcom’s software-related revenue to about 45% of its total sales. Broadcom will instantly be validated as a major software player with the acquisition of VMware, Futurum Research analyst Daniel Newman said.

“Having something like VMware … will have a significant number of doors open that their current portfolio probably doesn’t open for them,” Newman added.


TensorFlow & Julia on a Jetson Nano

A few months ago I bought a Jetson Nano as a Christmas present for myself. I promptly got busy with work and life and forgot to set it up until a month ago. I followed the instructions on how to flash the unit and got Ubuntu 18 up and running. That was the easy part; it was when I wanted to install Julia, TensorFlow, and Python that it got less easy.

My goal was to use TensorFlow, Julia, and Python3 to build all kinds of GPU-enabled projects. I plan on hacking this stuff together over time, but I ended up spending one evening weeding through more tutorials and Stack Overflow threads than I care to count. Below is a short and easy set of steps that I used to install them.

Installing Julia

DO NOT install the binaries if you want to run the Julia Language on the Jetson Nano. You are better off compiling from source. It took me several trials (and several errors) to figure this out because I was lazy: I wanted to just grab the binaries and be done with it. I also learned the hard way that if you download the binaries you get the latest released version, which could present problems with some packages NOT working because they haven’t caught up to the latest Julia. As of this writing, the current Julia version is 1.4.0. I opted for the 1.3.0 version, and so far so good.

I went to the Julia Language GitHub site, did a ‘git clone https://github.com/JuliaLang/julia.git’ into a $HOME/julia directory, and then did a ‘git checkout v1.3.0’ to get the version I wanted.

$ git clone https://github.com/JuliaLang/julia.git ~/julia

$ cd ~/julia

$ git checkout v1.3.0

$ make


After that, it was a simple ‘make’ and then I waited for about an hour. I did run ‘make testall’ to check if everything was installed correctly, and it took several hours to run. It did not complete because I canceled it after 2 hours. I know, I’m living dangerously here.


$ cd ~/julia

$ make testall


Why Julia?

Honestly, I had problems installing TensorFlow on the first go-around and wanted to use the TensorFlow.jl package instead. Once again, I was being lazy. This turned out to be a problem, and I haven’t figured out what the reason is. I suspect the cause has to do with mismatched JetPack, Julia, and TensorFlow.jl versions. Bummer.

However, all is not lost with using the Nano’s GPU. There are other great GPU libraries for Julia, such as CUDAnative.jl. I discovered this library (and others) via NVIDIA’s Dev blog here. I used the first example in the blog post in Julia and got the GPU to respond. While I might not be able to use TensorFlow with Julia just yet, this is a good thing as it will allow me to explore GPUs on the Nano with my Julia install.

Python3 and TensorFlow

I originally installed Python3 a while back on the Nano and then tried to install TensorFlow by doing a ‘sudo pip3 install tensorflow‘. This did not work for me because I was missing several dependencies. It wasn’t until I found this blog post on deep learning frameworks from NVIDIA that I got a successful TensorFlow install. Now I have it up and running and training a simple LSTM hAIku generator as a test. So far so good.
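If you want to confirm the install can actually see the Nano’s GPU, a quick sanity check along these lines works. This is a sketch: it assumes a TensorFlow 2.x wheel is installed and degrades gracefully if TensorFlow is missing.

```python
# Quick post-install sanity check: list the GPUs TensorFlow can see.
# Returns None if TensorFlow isn't importable, so it's safe to run anywhere.
def gpu_check():
    try:
        import tensorflow as tf
    except ImportError:
        return None  # TensorFlow not installed
    # TF 2.x API; on the Nano this should report one GPU device
    return [d.name for d in tf.config.list_physical_devices("GPU")]

print(gpu_check())
```

An empty list here means TensorFlow installed but was built without GPU support, which is exactly the failure mode the plain pip wheel can produce.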

Other Tips to Avoid Hair Loss

The Jetson Nano’s onboard memory is really small, only 4GB. Normally this is OK, but there are times when you’re setting up your deep learning experiment that it will cause an Out Of Memory (OOM) error. I’ve had to make my sequence lengths smaller when using the LSTM model so that it could fit into the onboard memory.
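The effect of sequence length on memory is roughly linear, so a back-of-envelope estimate like the one below helps pick a length that fits in 4GB. The shapes are assumptions for illustration, and this ignores weights, optimizer state, and framework overhead; it is not an exact accounting of TensorFlow’s allocator.

```python
# Coarse upper bound on LSTM activation memory (float32).
# The 4x factor covers the four LSTM gates kept around for backprop.
def lstm_activation_bytes(batch, seq_len, hidden, bytes_per=4):
    return batch * seq_len * hidden * 4 * bytes_per

full = lstm_activation_bytes(batch=64, seq_len=200, hidden=512)
half = lstm_activation_bytes(batch=64, seq_len=100, hidden=512)
print(full // 2**20, "MB vs", half // 2**20, "MB")  # halving seq_len halves the footprint
```

The takeaway is that shrinking the sequence length is one of the cheapest knobs to turn when you hit an OOM on a 4GB board.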

Trying to work around this led me to try resizing my swap file. By default, the Nano flashes with about 2GB of swap. I decided to double that to 4GB using the ‘resizeSwapMemory‘ utility. This helped initially, but I kept getting OOM errors until I changed the sequence length.
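To verify the new swap size actually took effect, you can read it straight out of /proc/meminfo. This is a small sketch; the fallback sample line is an assumption so the snippet also runs on non-Linux machines.

```python
import os

# Parse the SwapTotal value (reported in kB) out of /proc/meminfo text.
def swap_total_kb(meminfo_text):
    for line in meminfo_text.splitlines():
        if line.startswith("SwapTotal:"):
            return int(line.split()[1])
    return 0  # no swap line found

if os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        text = f.read()
else:
    # Sample line (hypothetical 4GB swap) so the snippet runs anywhere
    text = "SwapTotal:       4194300 kB"

print(swap_total_kb(text) / 1024, "MB of swap")
```

Running `free -h` on the Nano gives you the same number with less typing; the function is just handy if you want to log it from inside an experiment.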

Another useful tool is jtop, a CPU and GPU monitor that I use when I’m running an experiment on the Nano.

End Notes

If you have any questions or figured out how to use TensorFlow.jl, please drop me a comment or reach out to me on Twitter. Good luck!

Update in 2021

I recently upgraded the version of TensorFlow on the Nano to version 2.3.1. At first, I was tempted to just download the WHL file for it, but when I installed it, it didn’t find my GPU. So I had to compile everything from scratch.

However, it took 56 hours. Yes! 56. Freaking. Hours.

Then it worked perfectly. So if you want to do it, just follow the “From Scratch” instructions here.

H2O.ai Empowers MarketAxess to Innovate and Inform Trading Strategies – Bloomberg

“H2O is an integral part of Composite+ and provides some of the fundamental machine learning tools and support that make our algorithms run as well as they do,” said David Krein, Global Head of Research at MarketAxess. “The Composite+ pricing engine is helping fulfill our clients’ critical liquidity needs with more accurate and timely pricing data, which we make available within the MarketAxess electronic trading workflow. H2O.ai has been a great partner which has contributed to our recent success.”


Quantum Computing | Bojan Tunguz, Ph.D.

Until very recently it has only been possible to create systems comprising a handful of quantum bits (qubits). Regrettably, many such systems (superconductors, trapped ions, nitrogen-vacancy centers in diamonds) are hard to manufacture and scale. Another approach to qubits, electrons trapped in silicon, is particularly promising for large-scale fabrication. We have over three-quarters of a century of experience with manufacturing large-scale silicon-based systems, which underlie all of modern computing. Unfortunately, silicon-based systems have had issues with error-correction mechanisms, which made them unsuitable for large-scale computer systems – until now.


I’m going to have to say that I’m a complete Quantum Computing (QC) NOOB. I defer any expertise to the people who understand it better than I do, but I do know that Quantum Computing is a game-changer. I also know that it’s notoriously hard to make QC readily accessible for everyone to use.

Imagine my surprise when my former colleague Bojan made this post about advances in “porting” QC to silicon chips. If this works, it’ll be like taking near-zero-Kelvin supercomputing to room-temperature, mass-produced silicon chips.

We live in amazing times.

Francisco Partners scoops up bulk of IBM’s Watson Health unit | TechCrunch

In what has to be considered an anticlimactic ending, IBM sold off the data assets of its Watson Health unit to private equity firm Francisco Partners today. The two firms did not share the purchase price, but previous reports pegged the value at around $1 billion.


Ooof! That’s gotta hurt.

IBM is making this sale just as the healthcare vertical is heating up. Last year, Oracle bought health records company Cerner for $28 billion and Microsoft bought Nuance Communications in a deal valued at nearly $20 billion. While both deals are pending regulatory approval, it shows how much large companies value the health vertical.

As a result, it was a move that clearly caught Patrick Moorhead, principal analyst at Moor Insights & Strategy, by surprise. “I am very surprised because the puck is moving to more vertical solutions. I suppose it also shows potentially how poorly the unit was doing.”

Yes, I’m seeing a verticalization of products and services. It’s funny how this pendulum swings. First, it’s on-premise, now it’s cloud, and it might go back to on-premise again in the future.

Elastic CEO reflects on Amazon spat, license switch, and the principles of open source | VentureBeat

“As a company, we never treated open source as a business model — open source is not a business model,” he said. “The first principle of open source is around engaging on GitHub, for example — you use open source to engage with the community, you use open source as a way to create communities, you use open source to collaborate with people.”


Open Source isn’t free

As a last attempt to keep the lights on, I am switching the mode I am providing support for PLC4X: I am no longer implementing features users might need, I am no longer instantly fixing bugs for free. Especially I will not invest my private money to buy expensive hardware in order to implement or fix stuff I am then giving away for free.

blog/free-trial-expired.adoc at main · chrisdutz/blog · GitHub

Open Source Exploitation

Elastic NV, the company behind Elasticsearch and its companion product Kibana, changed its license from Apache 2.0 (fully open source) to the Server Side Public License (SSPL) and the Elastic License in 2021.

SSPL is often described as a ‘faux open source’ license, and the Elastic License dictates that use of its software is limited in the following ways:

You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software.

You may not move, change, disable, or circumvent the license key functionality in the software, and you may not remove or obscure any functionality in the software that is protected by the license key. – via Elastic License

This change was made because Amazon was selling Elastic’s open-source software as a service (SaaS) without “sharing in the revenues.”

This happens quite a bit in the open-source world if what you and your team create is really good. You and your fellow creators push out the software, it gets adopted all over the place, and you start getting support questions. So you start selling enterprise support, but only a handful of people pay for it.

The reality is that giant organizations exploit your kindness: they take your work, bundle it into a platform or other software, sell it to people, and never pay you for support. If there’s a bug in the software, they’ll ask you to fix it for free first.

So what happened when Elastic Search changed its license? All hell broke loose for a while.

CEO Shay Banon was asked about what to expect and his response was great:

“we totally expected it to happen,” Banon said — but Elastic had already bolstered its commercial offering to protect it against any future open source kerfuffles. – via VentureBeat

Amazon forked the repo, said they were going to build their own version of Elasticsearch, and called it OpenSearch. Some Elasticsearch users did not make the move to the new license either and forked the repository too. There was even discussion amongst the group of companies on how to “co-develop” this going forward.

Other users of the ElasticSearch ecosystem, including Logz.io, CrateDB and Aiven, also committed to the need for a fork, leading to discussion of how to coordinate the open source efforts – via Wikipedia

This cracks me up but makes me mad at the same time. Here are the same companies that relied on Elasticsearch and have been exploiting that software, and the countless hours its developers put in, for free, now scrambling to push off the development costs onto a “consortium.”

LOL. That’s all I have to say.

Here’s the simple thing: open source is not free. If you’re a multi-billion-dollar company and use open-source technology in your products or offer a service with it, you should pay. You should buy enterprise support, or you should provide royalties or some other agreed-upon compensation for everyone’s hard work.

Corporations have gotten used to exploiting many powerful open-source products and not giving anything back to the community or those developers. They nickel and dime you to death and then expect you to fix bugs for free, as the lead developer of the Apache PLC4X project explains:

As a last attempt to keep the lights on, I am switching the mode I am providing support for PLC4X: I am no longer implementing features users might need, I am no longer instantly fixing bugs for free. Especially I will not invest my private money to buy expensive hardware in order to implement or fix stuff I am then giving away for free. – via Github

The moment you stand up and say, “wait a minute, this isn’t fair, I’m being exploited,” the exploiters throw a hissy fit, grab their ball, and go home.

Here’s what I say: feel free to fork it. Feel free to build whatever “consortium” you want, but it’s going to be a futile effort. You won’t be able to use the original name in your forked product or service, so you won’t get the brand recognition. You won’t have an in-depth understanding of the codebase or the years of experience that the developers, maintainers, and makers have.

Don’t void the social contract that is open source: if you get value from it, then put money into the pot.
