Eric Raymond Shares ‘Code Archaeology’ Tips, Urges Bug-Hunts in Ancient Code

See the original posting on Slashdot

Open source guru Eric Raymond warned about the possibility of security bugs lurking in critical code that may now be more than two decades old, in a talk titled "Rescuing Ancient Code" at last week's SouthEast Linux Fest in North Carolina. In a new interview with ITPro Today, Raymond offered this advice on the increasingly important art of "code archaeology".
"Apply code validators as much as you can," he said. "Static analysis, dynamic analysis, if you're working in Python use Pylint, because every bug you find with those tools is a bug that you're not going to have to bleed through your own eyeballs to find… It's a good thing when you have a legacy code base to occasionally unleash somebody on it with a decent sense of architecture and say, 'Here's some money and some time; refactor it until it's clean.' Looks like a waste of money until you run into major systemic problems later because the code base got too crufty. You want to head that off…."
“Documentation is important,” he added, “applying all the validators you can is important, paying attention to architecture, paying attention to what’s clean is important, because dirty code attracts defects. Code that’s difficult to read, difficult to understand, that’s where the bugs are going to come out of apparent nowhere and mug you.”
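To make Raymond's validator advice concrete, here is a minimal sketch of the kind of defect a static analyzer such as Pylint catches in legacy Python before anyone has to read the whole module; the file, function, and variable names are illustrative, not from the interview.

```python
# legacy_billing.py -- illustrative legacy-style code with latent defects
def apply_discount(order, rate):
    """Return the order total after applying a discount rate."""
    total = order["total"]
    if rate > 1:
        # Typo: 'ratee' was never defined; Pylint flags this as E0602 (undefined-variable)
        rate = ratee / 100
    discounted = total * (1 - rate)
    return discount  # Another undefined name Pylint reports before the code ever runs
```

Running "pylint legacy_billing.py" surfaces both undefined names immediately, which is exactly the sort of bug you would otherwise have to "bleed through your own eyeballs to find."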
For a final word of advice, Raymond suggested that it might be time to consider moving away from some legacy programming languages as well. "I've been a C programmer for 35 years and have written C++, though I don't like it very much," he said. "One of the things I think is happening right now is the dominance of that pair of languages is coming to an end. It's time to start looking beyond those languages for systems programming. The reason is we've reached a project scale, we've reached a typical volume of code, at which the defect rates from the kind of manual memory management that you have to do in those languages are simply unacceptable anymore… I think it's time for working programmers and project managers to start thinking about, how about if we not do this in C and not incur those crazy downstream error rates."
Raymond says he prefers Go for his alternative to C, complaining that Rust has a high entry barrier, partly because “the Rust people have not gotten their act together about a standard library.”

Read more of this story at Slashdot.

Verizon’s New Phone Plan Proves It Has No Idea What ‘Unlimited’ Actually Means

See the original posting on Slashdot

Verizon has unveiled its third "unlimited" smartphone plan that goes to show just how meaningless the term has become in the U.S. wireless industry. "In addition to its Go Unlimited and Beyond Unlimited plans, Verizon is now adding a premium Above Unlimited plan to the mix, which offers 75GB of 'unlimited' data per month (as opposed to the 22GB of 'unlimited' data you get on less expensive plans), along with 20GB of 'unlimited' data when using your phone as a hotspot, 500GB of Verizon cloud storage, and five monthly international Travel Passes, which are daily vouchers that let you use your phone's wireless service abroad the same as if you were in the U.S.," reports Gizmodo. Are you confused yet? From the report: And as if that wasn't bad enough, Verizon has also updated its convoluted sliding pricing scheme that adjusts based on how many phones are on a single bill. For families with four lines of service, the Above Unlimited plan costs $60 per person, but if you're a single user the same service costs $95, which really seems like bullshit because if everything is supposed to be unlimited, it shouldn't really make a difference how many people are on the same bill. As a small concession to flexibility, Verizon says families with multiple lines can now mix and match plans instead of having to choose a single plan for every line, which should allow families to choose the right service for an individual person's needs and help keep costs down. The new Above Unlimited plan and the company's mix-and-match feature arrive next week, on June 18th.

Read more of this story at Slashdot.

Samsung Unveils Chromebook Plus V2

See the original posting on Slashdot

Brian Fagioli, writing for BetaNews: Samsung has announced its latest Chromebook: the premium, yet affordable, Chromebook Plus (V2). This is a refresh of the first-gen "Plus" model. It can run Android apps and doubles as a convertible tablet, making it very versatile. Best of all, you won't have to wait long to get it — it will go on sale very soon. "The Samsung Chromebook Plus (V2) puts productivity and entertainment at consumers' fingertips and at the tip of the built-in pen. At 2.91 pounds, its thin design makes it easy to slip into a bag and carry all day — or use throughout the day with its extended battery life. Flipping its 12.2-inch FHD 1920×1080 resolution screen transforms it from notebook to tablet to sketchbook — and back — with two cameras for making it easier to stay connected with friends and sharing with the world. Plus, Chrome OS helps users get more done by providing access to millions of Android apps on Google Play," says Samsung. The Chromebook Plus (V2), powered by an Intel Celeron 3965Y processor and 4GB of RAM, goes on sale later this month at $499.

Read more of this story at Slashdot.

Carmel, Libra, and Andromeda Are the Next Wave of Surface Devices: Report

See the original posting on Slashdot

Brad Sams, writing for the Thurrott blog: To help grow the footprint of the brand, Microsoft is working on updates to its existing products as well as a couple of new offerings. I was able to view a few internal documents that outlined some of the future plans for the Surface brand and that identify previously unknown codenames for upcoming products. The Surface Pro 6 is internally known as Carmel, the upcoming low-cost Surface tablet is going by the name of Libra, and then, of course, there is the Andromeda device that we have been talking about for many months. The Libra tablet is likely the device that Bloomberg reported on earlier this year: a low-cost Surface tablet slated for 2018. The Surface Pro 6 (Carmel) does not list a shipping date, and considering that Microsoft only recently released the LTE variant of the Surface Pro 5, this product may not arrive as soon as many have hoped. That being said, a refresh of the product is in the pipeline and actively being developed. And then there is Andromeda; here is where things get a bit more interesting. According to the documentation, the device is scheduled to be released in 2018. Microsoft thinks of this hardware as a pocketable device meant to create a truly personal and versatile computing experience.

Read more of this story at Slashdot.

Four Years On, Developers Ponder The Real Purpose of Apple’s Swift Programming Language

See the original posting on Slashdot

Programming languages such as Lua, Objective-C, Erlang, and Ruby (on Rails) offer distinct features, but they are also riddled with certain well-documented drawbacks. However, writes respected critic Dominik Wagner, their origination and continued existence serve a purpose. In 2014, Apple introduced the Swift programming language. Four years on, Wagner, along with the many developers who have shared his blog post over the weekend, wonders what exactly Swift is trying to solve; the post captures the struggle that at least a portion of developers writing in Swift face today. Writes Wagner: Swift just wanted to be better, more modern, the future — the one language to rule them all. A first red flag for anyone who ever tried to do a 2.0 rewrite of anything. On top of that it chose to be opinionated about features of Objective-C that many long time developers consider virtues, not problems: adding compile time static dispatch, and making dynamic dispatch and message passing a second class citizen and introspection a non-feature; defining the convenience and elegance of nil-message passing only as a source of problems; classifying the implicit optionality of objects purely as a source of bugs. […] It keeps deferring the big wins to the future while it only offered a very labour intensive upgrade path. Without a steady revenue stream, many apps that would have just compiled fine if done in Objective-C either can't take advantage of new features of the devices easily, or had to be taken out of the App Store altogether, because upgrading would be too costly. If you are working in the indie dev-scene, you probably know one of those stories as well. And while this is supposed to be over now, the damage has been done and is real. On top of all of this, there is that great tension with the existing Apple framework ecosystem. While Apple did as good a job as it could of exposing Cocoa/Foundation to Swift in a graspable way, there is still great tension between the way Swift wants to see the world and the design paradigms that created the existing frameworks. That tension is not resolved yet, and since it is a design conflict, it essentially can't be resolved, only mitigated. It runs from old foundational design patterns of Cocoa, like delegation, data sources, and flat class hierarchies, over to the way the collection classes work and how forgiving the API in general should be. If you work in that world you are constantly torn between doing things the Swift/standard-library way or the Cocoa way and bridging in between. To make matters worse, there are a lot of concepts that don't even have a good equivalent. This, for me at least, generates an almost unbearable mental load.

Read more of this story at Slashdot.

tvOS 12 Brings Dolby Atmos Support, Zero Sign-In, and TV App Improvements

See the original posting on Slashdot

If you're using an Apple TV as your main streaming box, you will be happy to know several big improvements are coming to the platform. Macworld reports on what's new in tvOS 12: With tvOS 12, Dolby Atmos comes to the Apple TV 4K. All you need for full 3D immersive audio is an Atmos-supporting sound bar or receiver. This makes Apple TV 4K the only streaming media box to be certified for both Dolby Vision and Dolby Atmos.

One of the best features of tvOS 11 is called Single Sign-on. You add your TV provider’s login information to your Apple TV device. If an app supports Single Sign-on, you can log in with your TV provider with just a few taps. It’s a big step forward, but still a little bit of a pain. With tvOS 12, Apple makes the whole process totally seamless with Zero Sign-on. Here’s how it works: If your TV provider is your Internet provider (a very common occurrence here in the United States), and your Apple TV is connected to the Internet through that provider, you sign in automatically to any Apple TV app your provider gives you access to. Just launch the app, and you’re signed in, no passwords or configuration needed at all.

Apple's breathtaking 4K video screensavers, called "Aerials," are one of those minor delights that Apple TV 4K users can't get enough of. In tvOS 12, they get better. You can tap the remote to see the location at which the Aerial was filmed. A new set of Aerials is the star of the show, however. Called "Earth," these are stunning videos from space, taken by astronauts at the International Space Station. Furthermore, the TV app will provide live content from select TV providers; Charter Spectrum will support the app with live channels and content later this year. Apple is also now allowing third-party home control systems' remotes to control your Apple TV (including Siri).

Read more of this story at Slashdot.

Google’s Lens AI Camera Is Now a Standalone App

See the original posting on Slashdot

Google Lens is now available as an app in the Play Store for devices with Android Marshmallow and above. The app is designed to bring up relevant information using visual analysis. Android Police reports: When you open the app, it goes right into a live viewfinder with Lens looking for things it can ID. Like the Assistant version of Lens, you can tap on items to get more information (assuming Google can figure out what they are) and copy text from documents. However, I've noticed that copying text doesn't work on the OnePlus 6 right now. It works fine with the built-in Lens version. Some users are reporting that it's not working properly on some devices, so keep that in mind if you decide to give it a whirl.

Read more of this story at Slashdot.

Atari Launches Linux Gaming Box Starting at $199

See the original posting on Slashdot

An anonymous reader quotes Linux.com:
Attempts to establish Linux as a gaming platform have failed time and time again, with Valve’s SteamOS being the latest high-profile casualty. Yet, Linux has emerged as a significant platform in the much smaller niche of retro gaming, especially on the Raspberry Pi. Atari has now re-emerged from the fog of gaming history with an Ubuntu-based Atari VCS gaming and media streaming console aimed at retro gamers. In addition to games, the Atari VCS will also offer Internet access and optional voice control. With a Bluetooth keyboard and mouse, the system can be used as a standard Linux computer.
The catch is that the already delayed systems won’t ship until July 2019… By the launch date, Atari plans to have “new and exclusive” games for download or streaming, including “reimagined classic titles from Atari and other top developers,” as well as multi-player games. The Atari VCS Store will also offer video, music and other content… The hardware is not open source, and the games will be protected with HDCP. However, the Ubuntu Linux stack based on Linux kernel 4.10 is open source, and includes a “customizable Linux UX.” A Linux “sandbox” will be available for developing or porting games and apps. Developers can build games using any Linux compatible gaming engine, including Unity, Unreal Engine, and Gamemaker. Atari also says that “Linux-based games from Steam and other platforms that meet Atari VCS hardware specifications should work.”
Atari boasts this will be their first device offering online multi-player experiences, and the device will also come pre-loaded with over 100 classic Atari games.
An Indiegogo campaign this week seeking $100,000 in pre-orders has already raised over $2.2 million from 8808 backers.

Read more of this story at Slashdot.

DeepMind Used YouTube Videos To Train Game-Beating Atari Bot

See the original posting on Slashdot

Artem Tashkinov shares a report from The Register: DeepMind has taught artificially intelligent programs to play classic Atari computer games by making them watch YouTube videos. Exploration games like 1984’s Montezuma’s Revenge are particularly difficult for AI to crack, because it’s not obvious where you should go, which items you need and in which order, and where you should use them. That makes defining rewards difficult without spelling out exactly how to play the thing, and thus defeating the point of the exercise. For example, Montezuma’s Revenge requires the agent to direct a cowboy-hat-wearing character, known as Panama Joe, through a series of rooms and scenarios to reach a treasure chamber in a temple, where all the goodies are hidden. Pocketing a golden key, your first crucial item, takes about 100 steps, and is equivalent to 100^18 possible action sequences.

To educate their code, the researchers chose three YouTube gameplay videos for each of the three titles: Montezuma’s Revenge, Pitfall, and Private Eye. Each game had its own agent, which had to map the actions and features of the title into a form it could understand. The team used two methods: temporal distance classification (TDC), and cross-modal temporal distance classification (CDC). The DeepMind code still relies on lots of small rewards, of a kind, although they are referred to as checkpoints. While playing the game, every sixteenth video frame of the agent’s session is taken as a snapshot and compared to a frame in a fourth video of a human playing the same game. If the agent’s game frame is close or matches the one in the human’s video, it is rewarded. Over time, it imitates the way the game is played in the videos by carrying out a similar sequence of moves to match the checkpoint frame. In the end, the agent was able to exceed average human players and other RL algorithms: Rainbow, ApeX, and DQfD. The researchers documented their method in a paper this week. You can view the agent in action here.
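As a rough sketch of the checkpoint mechanism described above (a toy illustration, not DeepMind's actual code), the reward logic amounts to comparing an embedding of the agent's sampled frame against the next unmatched embedding from the human reference video; the embedding vectors are assumed to come from networks like the TDC/CDC models mentioned, and the similarity threshold is a made-up placeholder.

```python
import numpy as np

def checkpoint_reward(agent_embeddings, reference_embeddings, threshold=0.9):
    """Toy checkpoint reward: +1 each time an agent frame embedding (sampled every
    16th frame upstream) is close enough to the next unmatched checkpoint
    embedding taken from the human playthrough video."""
    reward, next_checkpoint = 0.0, 0
    for z in agent_embeddings:
        if next_checkpoint >= len(reference_embeddings):
            break
        ref = reference_embeddings[next_checkpoint]
        cosine = float(np.dot(z, ref) / (np.linalg.norm(z) * np.linalg.norm(ref) + 1e-8))
        if cosine > threshold:       # the agent has "reached" this checkpoint
            reward += 1.0
            next_checkpoint += 1
    return reward
```

Sequencing the checkpoints this way is what nudges the agent to imitate the order of moves in the reference playthrough rather than matching frames at random.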

Read more of this story at Slashdot.

HoloLens Can Act As Eyes For Blind Users and Guide Them With Audio Prompts, New Research Shows

See the original posting on Slashdot

New research shows that Microsoft’s HoloLens augmented-reality headset works well as a visual prosthesis for the vision impaired, not relaying actual visual data but guiding them in real time with audio cues and instructions. TechCrunch reports: The researchers, from Caltech and University of Southern California, first argue that restoring vision is at present simply not a realistic goal, but that replacing the perception portion of vision isn’t necessary to replicate the practical portion. After all, if you can tell where a chair is, you don’t need to see it to avoid it, right? Crunching visual data and producing a map of high-level features like walls, obstacles and doors is one of the core capabilities of the HoloLens, so the team decided to let it do its thing and recreate the environment for the user from these extracted features. They designed the system around sound, naturally. Every major object and feature can tell the user where it is, either via voice or sound. Walls, for instance, hiss (presumably a white noise, not a snake hiss) as the user approaches them. And the user can scan the scene, with objects announcing themselves from left to right from the direction in which they are located. A single object can be selected and will repeat its callout to help the user find it. That’s all well for stationary tasks like finding your cane or the couch in a friend’s house. But the system also works in motion.

The team recruited seven blind people to test it out. They were given a brief intro but no training, and then asked to accomplish a variety of tasks. The users could reliably locate and point to objects from audio cues, and were able to find a chair in a room in a fraction of the time they normally would, and avoid obstacles easily as well. Then they were tasked with navigating from the entrance of a building to a room on the second floor by following the headset’s instructions. A “virtual guide” repeatedly says “follow me” from an apparent distance of a few feet ahead, while also warning when stairs were coming, where handrails were and when the user had gone off course. All seven users got to their destinations on the first try, and much more quickly than if they had had to proceed normally with no navigation.
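As a hedged illustration of the scan behavior described above (not the researchers' implementation), the left-to-right callout can be thought of as sorting detected features by horizontal angle relative to the user and announcing each in turn; the scene below is invented.

```python
def scan_callouts(objects):
    """Announce detected objects from left to right by horizontal angle.

    objects is a list of (name, angle_degrees) pairs; negative angles are
    to the user's left, positive angles to the right."""
    for name, angle in sorted(objects, key=lambda item: item[1]):
        side = "left" if angle < 0 else "right"
        print(f"{name}: {abs(angle):.0f} degrees to your {side}")

# Hypothetical scene: a doorway ahead-left, a chair nearly straight ahead, a wall to the right
scan_callouts([("chair", 5), ("doorway", -30), ("wall", 60)])
```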

Read more of this story at Slashdot.

Intellivision Lives: Tommy Tallarico Will Relaunch 1980s Console

See the original posting on Slashdot

craters writes: A wave of nostalgia has hit gamers, with Nintendo and Atari taking advantage with launches, both recent and pending, of older game consoles. Now they’ll have a new competitor with Intellivision Entertainment. Originally released in 1980, the Intellivision console and its successors sold millions of units over three decades. The new Intellivision system (name TBA) will carry on the company tradition of “firsts” with its new concept, design and approach to gaming. The original Intellivision system generated many “firsts” in the video game industry including the first 16-bit gaming machine, the first gaming console to offer digital distribution, the first to bring speech/voice to games, the first to license professional sports leagues and organizations and the first to be a dedicated game console and home computer.

Read more of this story at Slashdot.

How Canada Ended Up As An AI Superpower

See the original posting on Slashdot

pacopico writes: Neural nets and deep learning are all the rage these days, but their rise was anything but sudden. A handful of determined researchers scattered around the globe spent decades developing neural nets while most of their peers thought they were mad. An unusually large number of these academics — including Geoff Hinton, Yoshua Bengio, Yann LeCun and Richard Sutton — were working at universities in Canada. Bloomberg Businessweek has put together an oral history of how Canada brought them all together, why they kept chasing neural nets in the face of so much failure, and why their ideas suddenly started to take off. There's also a documentary featuring the researchers and Prime Minister Justin Trudeau that tells more of the story and looks at where AI technology is heading — both the good and the bad. Overall, it's a solid primer for people wanting to know about AI and the weird story of where the technology came from, but might be kinda basic for hardcore AI folks.

Read more of this story at Slashdot.

A Middle-Aged Writer’s Quest To Start Learning To Code For the First Time

See the original posting on Slashdot

OpenSourceAllTheWay writes: The Economist's 1843 magazine details one middle-aged writer's (Andrew Smith) quest to learn to code for the first time, after becoming interested in the "alien" logic mechanisms that power completely new phenomena like cryptocurrency and effectively make the modern world function in the 21st century. The writer discovers that there are over 1,700 actively used computer programming languages to choose from, and that every programmer he asks "Where should someone like me start with coding?" contradicts the next in his or her recommendation. One seasoned programmer tells him that programmers discussing which language is best is the equivalent of watching "religious wars." The writer is stunned by how many of these languages were created by unpaid individuals who often built them for "glory and the hell of it." He is also amazed by how many people help each other with coding problems on the internet every day, and by the computer-programmer culture that non-technical people are oblivious to. Eventually the writer finds a chart of the most popular programming languages online, and discovers that these are Python, JavaScript, and C++. The syntax of each of these languages looks indecipherable to him. The writer, with some help from online tutorials, then learns how to write a basic Python program that looks for keywords in a Twitter feed. The article is interesting in that it shows what the "alien world of coding" looks like to people who are not already computer nerds and in fact know very little about how computer software works. There are many interesting observations on coding/computing culture in the article, seen through the lens of someone who is not a computer nerd and who has not spent the last two decades hanging out on Slashdot or Stack Overflow.
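For readers curious what a first program along the lines Smith describes might look like, here is a minimal, hedged sketch that scans already-fetched tweet text for keywords; the actual Twitter API calls are omitted, and the keywords and sample tweets are invented.

```python
# keyword_watch.py -- beginner-style sketch: find tweets mentioning chosen keywords
KEYWORDS = {"bitcoin", "ethereum", "blockchain"}   # illustrative search terms

def find_matches(tweets, keywords=KEYWORDS):
    """Return (tweet, matched_keywords) pairs for tweets containing any keyword."""
    matches = []
    for tweet in tweets:
        hits = {word for word in keywords if word in tweet.lower()}
        if hits:
            matches.append((tweet, hits))
    return matches

sample_tweets = [
    "Bitcoin slides again as regulators circle",
    "Nothing to see here, just lunch",
    "Why blockchain hype refuses to die",
]
for tweet, hits in find_matches(sample_tweets):
    print(sorted(hits), "->", tweet)
```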

Read more of this story at Slashdot.

Google and LG Unveil World’s Highest-Resolution OLED On-Glass VR Display

See the original posting on Slashdot

A couple months ago, Road to VR reported that Google and LG were planning to reveal the “world’s highest-resolution OLED on-glass display” for virtual-reality headsets on May 22nd. Well, that day has arrived and the two companies unveiled that very display. Android Authority reports: As expected, the 4.3-inch OLED 18MP display has a resolution of 4,800 x 3,840. The display has a pixel density of 1,443PPI and a 120Hz refresh rate. Google and LG referred to it as the “world’s highest-resolution OLED on-glass display.” For comparison’s sake, the HTC Vive has two 3.6-inch displays with resolutions of 1,200 x 1,080. The higher-end HTC Vive Pro has two 3.5-inch displays with resolutions of 1,600 x 1,440. The Vive Pro maxes out at 615PPI, making this new LG panel about 57% better than HTC’s best offering. However, there’s already one display that’s better than anything on offer, and that’s your own vision. A person with great vision sees in an estimated resolution of 9,600 x 9,000 with a PPI density of 2,183. In other words, this new display from Google and LG is about half as good as our own eyes. Unfortunately, there are no plans to use them in any consumer products yet. Google rep Carlin Verri told 9to5Google that the companies started this project to push the industry forward.
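Those pixel-density figures are easy to sanity-check: PPI is just the diagonal resolution in pixels divided by the diagonal size in inches. A quick sketch using the panel dimensions cited in the article:

```python
from math import hypot

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by the diagonal size."""
    return hypot(width_px, height_px) / diagonal_inches

print(f"Google/LG 4.3-inch panel: {ppi(4800, 3840, 4.3):.0f} PPI")     # ~1430, close to the quoted 1,443
print(f"HTC Vive Pro 3.5-inch panel: {ppi(1600, 1440, 3.5):.0f} PPI")  # ~615, matching the quoted figure
```

The slight mismatch on the first figure suggests the quoted 4.3-inch diagonal is rounded.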

Read more of this story at Slashdot.

Razer Slims Down Blade, Debuts MacOS-Compatible eGPU Enclosure

See the original posting on Slashdot

An anonymous reader quotes a report from Ars Technica: Today, Razer debuted big updates to its Razer Blade laptop, focusing on design and performance to usher the gaming notebook into 2018. While the new Blade still looks unmistakably “Razer,” its design has changed dramatically for the better. Razer upped the screen size from 14 inches to 15.6 inches, reducing the surrounding bezels to just 4.9mm so that the device fits in with the other nearly bezel-less ultrabooks popular today. Razer is offering 1080p 60Hz or 144Hz panels, along with a 4K touchscreen option as well. The larger display panel makes the laptop slightly heavier than its predecessor, and it’s a bit wider overall, too (4.7 pounds and 9.3 inches, respectively). However, the slimmer bezels, sharper edges, and aluminum unibody make the new Razer Blade look like a clear upgrade from the previous model.

Another new addition to the Razer lineup is the Core X, a Thunderbolt 3 external graphics enclosure with space for large, three-slot wide graphics cards. The Core X joins the Core V2 graphics enclosure as one of Razer's solutions for gamers who want to add desktop-like graphics power to their laptops — and it's more affordable than the V2 as well. While it's a bit stockier than Razer's existing enclosure, the Core X has an aluminum body with open vents to properly handle heat, regardless of the task at hand. The Core X connects to a compatible notebook through one Thunderbolt 3 port, providing eGPU access and 100W of power thanks to its 650W ATX power supply. It's both cheaper and seemingly easier to use than the V2, but that comes with some compromises: the Core X doesn't have Chroma lighting, and it lacks USB and Ethernet ports. Some other specs of the new Blade include an Intel Core i7-8750H processor, Nvidia GTX 1060 or 1070 with Max-Q graphics, up to 32GB of RAM, up to 2TB of PCIe-based SSD, and an 80Wh battery. There are three USB-A 3.1 ports, one proprietary charging port, one Thunderbolt 3 port, a Mini DisplayPort, and an HDMI port.

Read more of this story at Slashdot.

German Test Reveals That Magnetic Fields Are Pushing the EM Drive

See the original posting on Slashdot

“Researchers in Germany have performed an independent, controlled test of the infamous EM Drive with an unprecedented level of precision,” writes PvtVoid. “The result? The thrust is coming from interactions with the Earth’s magnetic field.” From the report: Instead of getting ahold of someone else’s EM drive, or Mach-effect device, the researchers created their own, along with the driving electronics. The researchers used precision machining and polishing to obtain a microwave cavity that was much better than those previously published. If anything was going to work, this would be the one. The researchers built up a very nice driving circuit that was capable of supplying 50W of power to the cavity. However, the amplifier mountings still needed to be worked on. So, to keep thermal management problems under control, they limited themselves to a couple of Watts in the current tests. The researchers also inserted an enormous attenuator. This meant that they could, without physically changing the setup, switch on all the electronics and have the amplifiers working at full noise, and all the power would either go to the EM drive or be absorbed in the attenuator. That gives them much more freedom to determine if the thrust was coming from the drive or not.

Even with a power of just a couple of watts, the EM-drive generates thrust in the expected direction (e.g., the torsion bar twists in the right direction). If you reverse the direction of the thruster, the balance swings back the other way: the thrust is reversed. Unfortunately, the EM drive also generates the thrust when the thruster is directed so that it cannot produce a torque on the balance (e.g., the null test also produces thrust). And likewise, that "thrust" reverses when you reverse the direction of the thruster. The best part is that the results are the same when the attenuator is put into the circuit. In this case, there is basically no radiation in the microwave cavity, yet the WTF-thruster thrusts on. So, where does the force come from? The Earth's magnetic field, most likely. The cables that carry the current to the microwave amplifier run along the arm of the torsion bar. Although the cable is shielded, it is not perfect (because the researchers did not have enough mu metal). The current in the cable experiences a force due to the Earth's magnetic field that is precisely perpendicular to the torsion bar. And, depending on the orientation of the thruster, the direction of the current will reverse and the force will reverse. The researchers conclude by saying: "At least, SpaceDrive [the name of the test setup] is an excellent educational project by developing highly demanding test setups, evaluating theoretical models and possible experimental errors. It's a great learning experience with the possibility to find something that can drive space exploration into its next generation."
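To see why an imperfectly shielded feed cable is a plausible culprit, recall that the force on a current-carrying wire perpendicular to a magnetic field is F = B * I * L. A rough order-of-magnitude sketch with assumed values (the actual current, exposed cable length, and shielding leakage are not specified here):

```python
# Order-of-magnitude check: force from Earth's field on the amplifier feed cable.
# All numbers below are illustrative assumptions, not measurements from the paper.
B = 50e-6   # Earth's magnetic field, roughly 50 microtesla
I = 2.0     # assumed effective current along the torsion arm, in amperes
L = 0.1     # assumed length of imperfectly shielded cable, in metres

force = B * I * L
print(f"F = {force * 1e6:.0f} micronewtons")   # ~10 uN: micronewton scale, the same ballpark as the reported "thrust"
```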

Read more of this story at Slashdot.

The Verge Goes Hands-On With the ‘Wildly Ambitious’ RED Hydrogen One Smartphone

See the original posting on Slashdot

It’s been almost a year since RED, a company known for its high-end $10,000+ cameras, teased a smartphone called the RED Hydrogen One. Several months have passed since the phone was announced and we still don’t know much about it, aside from it having a very industrial design and “Hydrogen holographic display.” Earlier this week, AT&T and Verizon confirmed that they’ll launch the device later this year. Now, The Verge’s Dieter Bohn has shared his hands-on impressions with the device, which he claims to be “one of the most ambitious smartphones in years from a company not named Apple, Google, or Samsung.” Here’s an excerpt from the report: The company better known for high-end 4K cameras with names like “Weapon” and “Epic-w” isn’t entering the smartphone game simply to sell you a better Android phone. No, this phone is meant to be one piece of a modular system of cameras and other media creation equipment — the company claims it will be “the foundation of a future multi-dimensional media system.” To that end, it has a big set of pogo-pins on the back to connect it to RED’s other cameras also to allow users to attach (forthcoming) modules to it, including lens mounts. If it were just a modular smartphone, we’d be talking about whether we really expected the company to produce enough modules to support it.

RED is planning on starting with a module that is essentially a huge camera sensor — the company is not ready to give exact details, but the plan is definitely more towards DSLR size than smartphone size. Then, according to CEO Jim Jannard, the company wants any traditional big camera lens to be attached to it. Answering a fan question, he joked that support for lenses will be “pretty limited,” working “just” with Fuji, Canon, Nikon, Leica, and more. […] The processor inside will be a slightly-out-of-date Qualcomm Snapdragon 835, but it seemed fast enough in the few demos I was able to try. Honestly, though, if you’re looking to get this thing just as a phone, you’re probably making your decision based on the wrong metrics. It’s probably going to be a perfectly capable phone, but at this price (starting at $1,195) what you’re buying into is the module ecosystem.

Read more of this story at Slashdot.

Christopher Nolan Returns Kubrick Sci-Fi Masterpiece ‘2001: A Space Odyssey’ To Its Original Glory

See the original posting on Slashdot

LA Times’ Kenneth Turan traces Christopher Nolan’s meticulous restoration of Kubrick’s masterpiece to its 70-mm glory: Christopher Nolan wants to show me something interesting. Something beautiful and exceptional, something that changed his life when he was a boy. It’s also something that Nolan, one of the most accomplished and successful of contemporary filmmakers, has persuaded Warner Bros. to share with the world both at the upcoming Cannes Film Festival and then in theaters nationwide, but in a way that boldly deviates from standard practice. For what is being cued up in a small, hidden-away screening room in an unmarked building in Burbank is a brand new 70-mm reel of film of one of the most significant and influential motion pictures ever made, Stanley Kubrick’s 1968 science-fiction epic “2001: A Space Odyssey.” Yes, you read that right. Not a digital anything, an actual reel of film that was for all intents and purposes identical to the one Nolan saw as a child and Kubrick himself would have looked at when the film was new half a century ago.

Read more of this story at Slashdot.

Design Commentary on Google’s New To-Do Tasks App

See the original posting on Slashdot

On the sidelines of Gmail’s big refresh push, Google also released a new app called Google Tasks. It’s a simple app that aims to help users manage their work and home tasks. But it’s being talked about for one more reason. From a blog post: Unlike most of their other apps, though, Tasks uses an inconsistent mix of Roboto, their old brand typeface, and Product Sans, their new one. The two faces don’t look good together — it’s like when Apple shipped apps that used both Helvetica and Lucida Grande. According to their announcement of Product Sans and their new logo, the typeface was supposed to be used in promotional materials and lockups, but there’s no mention of it being used for product UIs. In fact, the only other product I can find that has this same inconsistent mix is the new Gmail.com, also previewed today. It isn’t just about what these typefaces look like, either, but how they’re used. For example, when entering a new task, the name of the task is set in Product Sans; when it is added to the list, it becomes Roboto. Tapping on the task takes you to a details view where, now, the name of the task is in Product Sans. There are three options to add more information: if you want to add details, you’ll do it in Roboto, but adding a due date will be in Product Sans. The “add subtasks” button — well, text in the same grey as everything else except other buttons that are blue — is set in Product Sans, but the tasks are set in Roboto.

Read more of this story at Slashdot.

Slashdot Asks: How Do You Like the New Gmail UI?

See the original posting on Slashdot

Earlier today, Google pushed out the biggest revamp of Gmail in years. In addition to a new material design look, there are quick links to other Google services, such as Calendar, Tasks, and Keep, as well as a new “confidential mode” designed to protect users against certain attacks by having the email(s) automatically expire at a time of the sender’s choosing. Long-time Slashdot reader Lauren Weinstein shares their initial impressions of Google’s new Gmail UI: Google launched general access to their first significant Gmail user interface (UI) redesign in many years today. It’s rolling out gradually — when it hits your account you’ll see a “Try the new Gmail” choice under the settings (“gear”) icon on the upper right of the page (you can also revert to the “classic” interface for now, via the same menu). But you probably won’t need to revert. Google clearly didn’t want to screw up Gmail, and my initial impression is that they’ve succeeded by avoiding radical changes in the UI. I’ll bet that some casual Gmail users might not even immediately notice the differences.

The new Gmail UI is what we could call a “minimally disruptive” redesign of the now “classic” version. The overall design is not altered in major respects. So far I haven’t found any notable missing features, options, or settings. My impression is that the back end systems serving Gmail are largely unchanged. Additionally, there are a number of new features (some of which are familiar in design from Google’s “Inbox” email interface) that are now surfaced for the new Gmail. Crucially, overall readability and usability (including contrast, font choices, UI selection elements, etc.) seem so close to classic Gmail (at least in my limited testing so far) as to make any differences essentially inconsequential. And it’s still possible to select a dark theme from settings if you wish, which results in even higher contrast. Have you tried the new Gmail? If so, how do you like the new interface?

Read more of this story at Slashdot.
