The Linaro Developer Cloud has gone live, and users can apply to test an ARM-based server with Linux
Just one day after the announcement of the GA release of the Linux 4.7 kernel, the SparkyLinux developers inform their users that they can now test drive the new kernel from the unstable repository.
Today, July 26, 2016, Softpedia was informed by the Clear Linux team about the availability of new software updates for the GNU/Linux operating system designed for the Intel architecture.
With nine days to go, Microsoft really, really wants you to claim your free upgrade to Windows 10. Come to think of it, Microsoft has really, really wanted you to upgrade your Windows 7 or 8.1 PC to Windows 10 for more than a year, and backed it with the GWX subsystem -- first installed by KB 3035583 in March 2015, 15 months ago.
In case you ever wanted to have a Node.js window manager, there's now one for X11 environments that works on Chrome OS, Debian, and friends.
After several weeks of work on our WikiRating Google Summer of Code project, Davide, Alessandro and I have reached the point where we can visualize the entire project in its final stages.
More than a week ago I blogged about the new GUI made with GtkBuilder and Glade. Now, I will talk about what has changed since then with the GUI and also the new functionality that has been added to it.
I will start with the new "transition" page which I've added for the key download phase. Before going more in depth, I have to say that the app knows at every moment what state it is in, which really helps when adding more functionality.
Over the last few weeks, the openSUSE board and others expressed their concern about the current state of some openSUSE infrastructure: especially the reaction times to change something in the setup were mentioned multiple times. It looks like we have lost some administrators and/or contact points at SUSE who helped out in the past to eliminate problems or work together with the community.
As a result, a meeting was held during the openSUSE Conference 2016, including some SUSE employees and openSUSE community members, to discuss the current situation and search for possible solutions. The discussion was very fruitful and we’d like to share some of the results here to inform everyone and actively ask for help. If you want to join us, the openSUSE heroes, do not hesitate to contact us and join an incredible team!
Docker is growing by leaps and bounds, and along with it, its ecosystem. Being light, the predominant container deployment involves running just a single app or service inside each container. Most software products and services are made up of at least several such apps/services. We all want all our apps/services to be highly available and fault tolerant. Thus, Docker containers in an organization quickly start popping up like mushrooms after the rain. They multiply faster than rabbits. While, in the beginning, we play with them like cute little pets, as their numbers quickly grow we realize we are dealing with a herd of cattle, implying we’ve become cowboys. Managing a herd with your two hands, a horse, and a lasso will only get you so far. You won’t be able to ride after each and every calf that wanders in the wrong direction. To get back to containers from this zoological analogy—operating so many moving pieces at scale is impossible without orchestration—this is why we’ve seen the rise of Docker Swarm, Kubernetes, Mesos, CoreOS, RancherOS, and so on.
A massive transformation is underway in the way we manage IT infrastructure. More companies are looking for improved agility and flexibility. They are moving from traditional server stacks to cloudy infrastructure to support a new array of applications and services that must be delivered at breakneck pace in order to remain competitive.
Yet Bob does not believe the devops hammer should be used on anything that looks remotely like a nail. Accounting systems, supply chain management systems, warehouse management systems, and so on do not benefit from the constant modification enabled by devops. Those are bound by precise, interlocking processes along with granular permissions and regulations. Here, continuous change invites disaster of the type that ITIL-huggers and OCM (organizational change management) proponents fear most.
Linux Kernel 4.7 was released this week with a total of 36 contributions from five Collabora engineers. It includes the first contributions from Helen as a Collaboran and the first ever contributions to the kernel from Robert Foss. Here are some of the highlights of the work Collabora has done on Linux Kernel 4.7.
Enric added support for the Analogix anx78xx DRM bridge and fixed two SD card related issues on OMAP igep00x0: remove/insert detection, and support for reading the write-protect pin.
Gustavo de-staged the sync_file framework (Android Sync framework) that will be used to add explicit fencing support to the graphics pipeline, and started work on cleaning up usage of legacy vblank helpers.
For users who are running some form of Linux, this should come as welcome news--the final version of the Linux Kernel 4.7 is now finally released. Linux founder Linus Torvalds said of the announcement, “Despite it being two weeks since rc7, the final patch wasn’t all that big, and much of it is trivial one- and few-liners. There’s a couple of network drivers that got a bit more loving.”
OpenVZ, a long-standing Linux virtualization technology similar to LXC and Solaris Containers, is out with its major 7.0 release.
OpenVZ 7.0 has focused on merging the OpenVZ and Virtuozzo code-bases along with replacing their own hypervisor with Linux's KVM. With OpenVZ 7.0, it has become a complete Linux distribution based upon VzLinux.
I’m pleased to announce the release of OpenVZ 7.0. The new release focuses on merging OpenVZ and Virtuozzo source codebase, replacing our own hypervisor with KVM.
Git-cinnabar is a git remote helper for interacting with Mercurial repositories. It allows cloning, pulling and pushing from/to Mercurial remote repositories using git.
In the preceding post, I explained the use cases for the FreeIPA lightweight sub-CAs feature, how to manage CAs and use them to issue certificates, and current limitations. In this post I detail some of the internals of how the feature works, including how signing keys are distributed to replicas, and how sub-CA certificate renewal works. I conclude with a brief retrospective on delivering the feature.
Last year FreeIPA 4.2 brought us some great new certificate management features, including custom certificate profiles and user certificates. The upcoming FreeIPA 4.4 release builds upon this groundwork and introduces lightweight sub-CAs, a feature that lets admins mint new CAs under the main FreeIPA CA and allows certificates for different purposes to be issued in different certificate domains. In this post I will review the use cases and demonstrate the process of creating, managing and issuing certificates from sub-CAs. (A follow-up post will detail some of the mechanisms that operate behind the scenes to make the feature work.)
The second Armadillo release of the 7.* series came out a few weeks ago: version 7.200.2. And RcppArmadillo version 0.7.200.2.0 is now on CRAN and uploaded to Debian. This followed the usual thorough reverse-dependency checking of the by now over 240 packages using it.
For once, I let it simmer a little, preparing only a package update via the GitHub repo without a CRAN upload, to lower the update frequency a bit. Seeing that Conrad has started to release 7.300.0 tarballs, the time for a (final) 7.200.2 upload was now right.
Just like the previous release, it requires a recent enough compiler. As g++ is so common, we explicitly test for version 4.6 or newer. So if you happen to be on an older RHEL or CentOS release, you may need to get yourself a more modern compiler. g++ for R on Windows is now at 4.9.3, which is a decent (yet stable) choice; the 4.8 series of g++ will also do. For reference, the current LTS of Ubuntu is at 5.4.0, and we have g++ 6.1 available in Debian testing.
For a while now, we, Fedora QA, have been busy building Taskotron core features and didn't have many resources for additions to the tasks that Taskotron runs. That changed a few weeks back when we started running the task-dockerautotest, task-abicheck and task-rpmgrill tasks in our dev environment. Since we have been happy with the results of running those tasks, we deployed them to the production instance as well last week. Please note that the results of those tasks are informative only. Let's introduce the tasks briefly:
I have a long overdue blog entry about what happened in recent times. People who follow my tweets did catch some things. Most noteworthy, there was the Trans*Inter*Congress in Munich at the start of May. It was an absolute blast. I met so many nice and great people, and talked about and experienced so many great things there that I'm still getting a great motivational push from it every time I think back. It was also the time when I realized that I do in fact have body dysphoria, even though I thought I was fine with my body in general: being tall is a huge issue for me. Realizing that I have a huge issue (yes, pun intended) with my height was quite relieving, even though it doesn't make it go away. It's something that makes passing and transitioning harder for me. I'm well aware that there are tall women, and that there are dedicated shops for tall women, but that's not the only thing I have trouble with. What bothers me most is what people read into tall people: that they are always someone others can lean on for comfort, and that tall people are always considered to be self-confident and standing up for themselves (another pun, I know ... my bad).
This particular week has been tiresome, as I caught a cold. I came back from Cape Town, where DebConf was taking place. I arrived in Montreal in the middle of the week, so this week there is not much news…
I became interested in running Debian on NVIDIA's Tegra platform recently. NVIDIA is doing a great job getting support for Tegra upstream (u-boot, kernel, X.org and other projects). As part of ensuring good Debian support for Tegra, I wanted to install Debian on a Jetson TK1, a development board from NVIDIA based on the Tegra K1 chip (Tegra 124), a 32-bit ARM chip.
Twitter may finally have found the feature that gets people excited about its service again: night mode. Twitter is launching a night mode feature on Android today that switches most of the interface from white to a really deep blue. It looks nice, and I have no doubt that a lot of people will use it, because everyone seems to love a good night mode.
The FCC is worried. You and the FCC spend all this time and energy getting your radio certified, and then some bozo hacks in, changes how the radio works, and puts you out of spec.
And so, back in early 2015, the FCC issued some guidelines or questions regarding WiFi devices – particularly home routers – in an effort to ensure that your radio isn’t hackable.
The result has been that some router makers have simply locked down the platform so that it's no longer possible to do after-market modifications, and this has caused an outcry from after-market modifiers. The reason it's an issue is that these open-source developers have used the platform for adding apps or other software that, presumably, has nothing to do with the radio.
In an attempt to find the magic middle way, the prpl organization, headed by Imagination Technologies (IMG) and featuring the MIPS architecture, recently put out a proof of concept that they say gives both assurance to the FCC and freedom to open-source developers.
Communications startup Wire has open-sourced the full codebase for its Wire app, so it's easier for developers to build their own encrypted messaging clients.
Wire open-sourced the rest of the codebase that wasn't initially publicly available, including components related to the user interface, the web and native clients, and some internal developer tools. The company always planned to open-source the codebase, but didn't start out that way "because we were still working on other features," Alan Duric, co-founder and CTO of Wire, wrote in a Medium post.
Members of the OpenStack Foundation have been voting on upcoming release names and the results are now in.
Today’s interview is with David Egts, chief technologist, North America Public Sector at Red Hat. Red Hat has been around for twenty-five years and has hit over two billion dollars in annual revenue. Topics range from open source to partnering with Microsoft to the up-and-coming DevNationFederal.
In federal government circles, Red Hat made a big splash years ago by working with NASA to build incredibly fast systems. Red Hat has expanded so much in the past decade that the conversation with Egts didn't even get to NASA.
Last month, the Austrian State Secretary Muna Duzdar handed out the 'Oscars of the Open Data Community'. The awards were part of the 'open4data.at challenge 2016' organised earlier this year. The annual challenge aims to bring open data and ideas together in innovative and creative solutions.
After the two earthquakes that caused multiple casualties and widespread damage in the Italian region of Emilia-Romagna in 2012, multiple programmes were launched to reconstruct the affected areas. To make these efforts more transparent, a team from the Gran Sasso Science Institute last week presented an Open Data platform that will provide all information on who is responsible, which company is doing what, and how the money is being spent.
The 'Open Data Ricostruzione' initiative was presented last week at the Italian Festival of Participation. The platform will bring together all the numbers, figures and information on the reconstruction, and allow visitors to visualise, filter, track and map the available data. All information will be made available as open data, in the original database format as well as JSON.
Soon it will be four years since I started working on the AArch64 architecture. A lot of software things changed during that time. A lot in hardware too. But machine availability still sucks badly.
In 2012 all we had was a software model. It was slow, terribly slow. A common joke was AArch64 developers standing in a queue for 10GHz x86-64 CPUs. So I was generating working binaries by using cross compilation. But many distributions only do native builds. In models. Imagine Qt4 building for 3-4 days…
In 2013 I got access to the first server hardware, with the first silicon version of the CPU. Highly unstable, we could use just one core, etc. GCC was crashing like hell, but we managed to get stable build results from it. Qt4 was building in a few hours now.
Last year I had the open source RISC-V instruction set running Linux, emulated in qemu. However, to really get into the architecture, and to restore my very rusty FPGA skills, wouldn't it be fun to have RISC-V working in real hardware?
The world of RISC-V is pretty confusing for outsiders. There are a bunch of affiliated companies, researchers who are producing actual silicon (nothing you can buy of course), and the affiliated(?) lowRISC project which is trying to produce a fully open source chip. I’m starting with lowRISC since they have three iterations of a design that you can install on reasonably cheap FPGA development boards like the one above. (I’m going to try to install “Untether 0.2” which is the second iteration of their FPGA design.)
A bounty-hunter has gone public with a complete howler made by Vine, the six-second-video-loop app Twitter acquired in 2012.
According to this post by @avicoder (Vjex at GitHub), Vine's source code was for a while available on what was supposed to be a private Docker registry.
While docker.vineapp.com, hosted at Amazon, wasn't meant to be available, @avicoder found he was able to download images with a simple docker pull.
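The underlying weakness is that the Docker Registry v2 HTTP API answers unauthenticated requests when a registry is left exposed: its catalog endpoint will list every repository it hosts. A rough sketch of what probing such an endpoint looks like; the helper functions are illustrative, not @avicoder's actual tooling:

```python
import json
import urllib.request


def catalog_url(registry):
    """Build the Registry v2 catalog endpoint, which lists all repository names."""
    return "https://{}/v2/_catalog".format(registry)


def list_repositories(registry):
    """Query an exposed registry; an open registry needs no credentials at all."""
    with urllib.request.urlopen(catalog_url(registry)) as resp:
        return json.load(resp)["repositories"]


# Probing the (since secured) registry named in the report would have meant
# calling list_repositories("docker.vineapp.com"); here we only show the URL.
print(catalog_url("docker.vineapp.com"))  # -> https://docker.vineapp.com/v2/_catalog
```

Once a repository name is known, `docker pull` against the same host retrieves the image layers, which is how the source code ended up downloadable.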
America's National Institute of Standards and Technology has advised abandonment of SMS-based two-factor authentication.
That's the gist of the latest draft of its Digital Authentication Guideline. In its discussion of out-of-band verifiers, the document says out-of-band verification using SMS is deprecated and won't appear in future releases of NIST's guidance.
Point Linux released their newest version, 3.2, in June 2016. Their goal is, "To combine the power of Debian GNU/Linux with the productivity of MATE, the GNOME 2 desktop environment fork. Point Linux provides an easy-to-set-up-and-use distribution for users looking for a fast, stable and predictable desktop."
Point Linux aims to use MATE as their primary desktop environment, but also offers Xfce as an option. The Point Linux website is simple and professional. The download page is full of fresh and very nice options that allow the user to download the exact distro they require to fit their needs. Some of the options include 32- or 64-bit, torrent or direct download, and the location of the download server. I found using the website was effortless and the options available cut down on the download time (by giving the option to torrent or the location of the server) and lowered the install time by giving the consumer options before retrieving the whole file.
The MATE desktop environment (DE) is available in the standard Debian installation media, but the full Debian installer image is an overwhelming 4.7GB, and carries too many DE options for the disc to be made any smaller. This is the small void that Point Linux fills. They provide the MATE desktop environment (or Xfce) and a significantly smaller live OS / installation media. Even when selecting the full-featured desktop from the options on their website, the Point Linux installer is only 1.00GB. The "Desktop with core components" option lowers this installation media size further to 772MB.
As mentioned in today's This Week in Servo newsletter, their Q3 roadmap plans have been published.
In late 2014, many observers were flummoxed to see that Yahoo and Mozilla had announced a "strategic five-year partnership" agreement which would make Yahoo the primary search option for Firefox. Mozilla was up for renewal negotiations for its deal with Google, which had historically subsidized more than 90 percent of Mozilla's revenues, to the tune of more than $300 million per year at times. In return, for lots of money, Google got primary search placement in the Firefox browser over the years.
Last week, though, Verizon announced its intention to purchase Yahoo for $4.8 billion. What are the implications for Mozilla and its deal? Here are the details.
The Stardew Valley developer tweeted out a password for a beta, but after discussing it with them on their forum I was able to show them that we can't actually access it yet.
While what I was telling them may not have been entirely correct (SteamDB is confusing), the main point I made was correct. Normal keys are not able to access the beta yet, but beta/developer keys can, as it's not currently set for Linux/Mac as a platform for us.
Human: Fall Flat is an open-ended physics puzzler with an optional local co-op mode, developed by No Brakes Games, and available now on Steam for Linux.
Controlling a party of adventurers, exploring dungeons and fighting weird magical creatures is an RPG tradition as old as the genre. Expect all that and more in this modern iteration of the classical dungeon crawler.
The European Commission announced on Wednesday that its IT engineers would provide a free security audit for the Apache HTTP Server and KeePass projects.
The EC selected the two projects following a public survey that took place between June 17 and July 8 and that received 3,282 answers.
The survey and security audit are part of the EU-FOSSA (EU-Free and Open Source Software Auditing) project, a test pilot program that received funding of €1 million until the end of the year.
While Microsoft would prefer you use its Edge browser on Windows 10 as part of its ecosystem, the most popular Windows browser is Google’s Chrome. But there is a downside to Chrome – spying and battery life.
It all started when Microsoft recently announced that its Edge browser used less battery power than Google Chrome, Mozilla Firefox or Opera on Windows 10 devices. It also measured telemetry – what the Windows 10 device was doing when using different browsers.
What it found was that the other browsers had significantly higher central processing unit (CPU) and graphics processing unit (GPU) overhead when viewing the same Web pages. Its tests also showed that using Edge resulted in 36-53% more battery life when performing the same tasks as the others.
Let’s not get into semantics about which search engine — Google or Bing — is better; this was about simple Web browsing, opening new tabs and watching videos. But it started a discussion as to why CPU and GPU usage was far higher. And it relates to spying and ad serving.
In December of 1967 the Silver Bridge collapsed into the Ohio River, killing 46 people. The cause was determined to be a single 2.5 millimeter defect in a single steel bar—some credit the Mothman for the disaster, but to most it was an avoidable engineering failure and a rebuttal to the design philosophy of substituting high-strength non-redundant building materials for lower-strength albeit layered and redundant materials. A partial failure is much better than a complete failure.
In 1996, Kocher co-authored the SSL v3.0 protocol, which would become the basis for the TLS standard. TLS is the difference between HTTP and HTTPS and is responsible for much of the security that allows for the modern internet. He argues that, barring some abrupt and unexpected advance in quantum computing or something yet unforeseen, TLS will continue to safeguard the web and do a very good job of it. What he's worried about is hardware: untested linkages in digital bridges.
A new report commissioned by the Department of Homeland Security forecasts that autonomous artificially intelligent robots are just five to 10 years away from hitting the mainstream—but there’s a catch.
The new breed of smart robots will be eminently hackable. To the point that they might be re-programmed to kill you.
The study, published in April, attempted to assess which emerging technology trends are most likely to go mainstream, while simultaneously posing serious “cybersecurity” problems.
The good news is that the near future is going to see some rapid, revolutionary changes that could dramatically enhance our lives. The bad news is that the technologies pitched to “become successful and transformative” in the next decade or so are extremely vulnerable to all sorts of back-door, front-door, and side-door compromises.
At issue is a fairly technical proposed standard called DMARC. Short for “Domain-based Message Authentication, Reporting and Conformance,” DMARC tries to solve a problem that has plagued email since its inception: It’s surprisingly difficult for email providers and end users alike to tell whether a given email is real – i.e. that it really was sent by the person or organization identified in the “from:” portion of the missive.
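Mechanically, a domain publishes its DMARC policy as a DNS TXT record made of semicolon-separated tag=value pairs, and receiving mail servers parse that record to decide what to do with mail that fails authentication. A minimal illustrative parser; the example record and report address are made up:

```python
def parse_dmarc(record):
    """Split a DMARC TXT record ("v=DMARC1; p=...; ...") into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue  # tolerate a trailing semicolon
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags


# Hypothetical policy for example.com: reject failing mail, send aggregate reports.
policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com")
print(policy["p"])  # -> reject
```

The `p` tag is the interesting one for the debate above: `none`, `quarantine`, or `reject` tells receivers how aggressively to act on unauthenticated mail claiming to come from that domain.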
The US National Institute of Standards and Technology (NIST) has released the latest draft version of the Digital Authentication Guideline that contains language hinting at a future ban on SMS-based Two-Factor Authentication (2FA).
The Digital Authentication Guideline (DAG) is a set of rules used by software makers to build secure services, and by governments and private agencies to assess the security of their services and software.
NIST experts are constantly updating the guideline, in an effort to keep pace with the rapid change in the IT sector.
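For context on the alternatives NIST favors over SMS delivery, authenticator apps generate one-time codes locally using TOTP (RFC 6238), which layers a time-step counter on top of HOTP (RFC 4226), so no code ever crosses the phone network. A compact sketch of the algorithm:

```python
import hashlib
import hmac
import struct
import time


def hotp(key, counter, digits=6):
    """One HOTP value (RFC 4226): HMAC-SHA1 over a big-endian 64-bit counter."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset from the last nibble
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key, at_time=None, step=30, digits=6):
    """TOTP (RFC 6238): HOTP keyed to the current 30-second time window."""
    now = time.time() if at_time is None else at_time
    return hotp(key, int(now // step), digits)


# RFC 6238 test vector: ASCII key, time = 59 s, 8 digits.
print(totp(b"12345678901234567890", at_time=59, digits=8))  # -> 94287082
```

Because both sides derive the code from a shared secret and the clock, there is nothing for an attacker to intercept in transit, which is precisely the weakness NIST identifies in SMS-based delivery.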
Details about 1.6 million users on the Clash of Kings online forum have been hacked, claims a breach notification site.
The user data from the popular mobile game's discussion forum were allegedly targeted by a hacker on 14 July.
Tech site ZDNet has reported the leaked data includes email addresses, IP addresses and usernames.
[Ed: vBulletin is proprietary software -- the same crap Canonical used for Ubuntu forums]