ASRock DeskMini A300 Review: An Affordable DIY AMD Ryzen mini-PC

See the original posting on Anandtech

Small form-factor (SFF) machines have emerged as a major growth segment in the desktop PC market. Performance per watt is an important metric for such systems. Intel has pretty much been the only game in town for such computers, given that AMD platforms prior to the launch of Ryzen could barely compete on that metric. The NUC (UCFF) and mini-STX (5×5) were introduced by Intel as the standard motherboard sizes for the SFF market. We have previously seen AMD-based NUC-like platforms (namely, the Zotac ZBOX CA320 nano back in 2014), and earlier this year, ASRock became the first vendor to announce an AMD-based mini-STX system – the DeskMini A300. Read on to find out how the DeskMini A300 stacks up against other contemporary SFF PCs.

The Acer Predator Triton 500 Laptop Review: Going Thin with GeForce RTX 2080

See the original posting on Anandtech

Gaming laptops continue to be a bright spot in the PC market, and practically every manufacturer offers some sort of system targeted at gamers. Some target the market more successfully than others, offering features that improve gameplay and visuals, while others focus more on what I’ll politely call the “gaming laptop aesthetic”, which includes a myriad of multi-colored LEDs and generally angular design cues. Diving head-first into that subject, today we’re taking a look at Acer’s gaming-focused Predator Triton 500 laptop. Although Acer has touched on a couple of the aesthetic design choices, they’ve kept it subtle, and still offer all of the accoutrements expected in a premium gaming laptop design.

Hands On with the OPPO Reno 10x Zoom: 6.6-inch OLED with No Notch and Popup Selfie Camera

See the original posting on Anandtech

There are two distinct lines of development in the modern high-end smartphone space: the move to full-screen devices, and the premium feature that makes a user go ‘wow’. The new OPPO Reno series of smartphones succeeds on both fronts: it has a notch-free front display, and in order to still provide a selfie camera, it has a motorized pop-up camera that rises out of the top. The question is whether that is enough to draw users into buying the smartphone.

The Huawei P30 & P30 Pro Reviews: Photography Enhanced

See the original posting on Anandtech

The last year has been an extremely exciting period for Huawei and its products: starting with the P20, the company’s flagships have been truly transformative in terms of their photography capabilities. The P20 and P20 Pro last year were extremely intriguing products for the industry, as they ushered in the first step towards an ever more prevalent aspect of modern cameras: computational photography.

Huawei pioneered techniques that brought innovative new use-cases, such as the multi-frame combination mechanism for low-light capture (a.k.a. Night Mode), which really raised the bar and led the way in terms of what we expect smartphone cameras to be capable of in low light. Huawei didn’t only innovate in software, but also used quite exotic camera sensors, such as the 40MP units in the P20 Pro and the Mate 20 Pro.

This year, Huawei doubled down on the photography aspects of its predecessors with the introduction of the new P30 and P30 Pro. The two new flagships pick up where the P20s left off, and provide yet again a new set of generational improvements to the camera setups. This year, along with software optimisations, we again see big changes in the camera hardware, with the introduction of an industry-first RYYB 40MP main camera sensor, as well as the addition of an even more exotic 5x telephoto camera module that is enabled via a prism mirror and a 90° sensor layout.

Naturally, the P30 and P30 Pro also bring overall improvements and redesigns in the other aspects of what makes them flagship smartphones, with larger batteries, new screens, and overall design revamps.

Sony Teases Next-Gen PlayStation: Custom AMD Chip with Zen 2 CPU & Navi GPU, SSD Too

See the original posting on Anandtech

After years of speculation about what could be and what Sony may be up to, the company is finally starting to ramp up the long launch cycle for their next-generation PlayStation console. In an exclusive article published this morning via Wired, Sony games guru and lead system architect Mark Cerny laid out a few tantalizing tidbits about the still-unnamed console, offering some basic information on the underlying system architecture while promising that it’s “no mere upgrade.”

The focal point of Wired’s article is, as many AnandTech readers would expect, on the chip at the heart of the system. Cerny and Sony (and AMD) are now confirming that yes, AMD is once again putting the console’s central processor together. The cutting-edge chip will be built on an unnamed 7nm process, and will incorporate all of AMD’s latest Zen 2 CPU and Navi GPU technologies. And while neither Cerny nor AMD are going quite so far as to call it an APU – AMD’s favored name for a chip with CPU and GPU cores integrated – it’s clear that this is very much a single chip, and seemingly an APU in everything but name.

The Zotac ZBOX CI660 nano Fanless mini-PC Review: A Promising HTPC Platform

See the original posting on Anandtech

Zotac is a major player in the SFF PC space, and the emergence of the ultra-compact form-factor (UCFF) NUCs has broadened the available market for their mini-PCs. The company markets their passively-cooled machines under the C-series moniker. Their C-series nano units adopt a form-factor very similar to Intel’s NUCs, providing performance and thermal efficiency commensurate with their size.

The company’s latest models, the CI6xx nano units, are based on the Kaby Lake-Refresh U-series SiPs, and they aim to improve on the older C-series units by adopting a larger form factor and adding more platform features. Today, we are taking a look at the ZBOX CI660 nano – Zotac’s flagship in the CI6xx lineup.

The GIGABYTE Z390 Aorus Pro WIFI Motherboard Review: A Sturdy $200 Surprise

See the original posting on Anandtech

On Intel’s desktop Z390 chipset, there are around seven different ATX-sized motherboards to choose from in the $180-200 price bracket. This puts pressure on manufacturers not only to deliver a high blend of premium features for a better price than the competition, but also to use unique visuals, like a peacock’s plume, to entice users. GIGABYTE’s Z390 Aorus Pro WIFI is one such board, with its $195 price tag. The Aorus brand is aimed squarely at gamers, and the Z390 Aorus Pro WIFI looks to stake GIGABYTE’s claim in a highly contested segment with a premium feature set at an affordable price.

AMD Launches 2nd Gen Ryzen Pro & Athlon Pro APUs

See the original posting on Anandtech

AMD on Monday introduced four new processors aimed at commercial laptops. The new AMD Ryzen Pro 3000-series and AMD Athlon Pro 300-series processors pack up to four x86 cores as well as AMD’s Radeon Vega integrated graphics. Because of improved power efficiency, AMD says that laptops powered by its latest Ryzen Pro APUs will work for up to 12 hours when used for general office workloads.

Intel’s Bean Canyon (NUC8i7BEH) Coffee Lake NUC Review – Ticking the Right Boxes

See the original posting on Anandtech

Intel’s NUCs have managed to develop a strong market for ultra-compact form-factor (UCFF) machines since they were introduced in the early 2010s. Each CPU generation has seen Intel put out stronger versions of the NUC (both in terms of performance and features) in a regular cadence. In parallel, we have seen experiments with slightly larger form-factors (such as the Skull Canyon and Hades Canyon NUCs). Today, we are looking at Intel’s flagship in their regular NUC category – the Core i7-based Bean Canyon (NUC8i7BEH).

Intel Launches the Xeon D-1600 Family: Upgrades to Xeon D-1500

See the original posting on Anandtech

Even if you’ve been keeping track of Intel’s Xeon family lines, the Xeon D family could probably give you cause for confusion. The same ‘generation’ of products spans a wide range of processors, from dense ECC-enabled server chips all the way through to big, bustling cryptography and network acceleration chips, when in actual fact each of these products is built on a different internal microarchitecture. Today Intel is at it again, with the new Xeon D-1600 family.

The Intel Second Generation Xeon Scalable: Cascade Lake, Now with Up To 56-Cores and Optane!

See the original posting on Anandtech

The cadence of Intel’s enterprise processor portfolio is designed to support customers that use the hardware with a guarantee of socket and platform support for at least three years. As a result, we typically get two lots of processors per socket: Sandy Bridge and Ivy Bridge, Broadwell and Haswell, and now Cascade Lake joins Skylake. Intel’s new Second Generation Xeon Scalable (the official name) still comes in the new ‘Platinum / Gold / Silver / Bronze’ nomenclature, but this time offering up to 56 cores if you want the processor equivalent of Wolverine at your disposal. Not only is Intel offering more cores, but there’s Optane support, faster DRAM, new configurations, and better specialization than before. Intel also surprised us with better-than-expected hardware support for Spectre and Meltdown mitigations while still providing higher performance overall.

Intel’s Enterprise Extravaganza 2019: Launching Cascade Lake, Optane DCPMM, Agilex FPGAs, 100G Ethernet, and Xeon D-1600

See the original posting on Anandtech

Today is the big day in 2019 for Intel’s Enterprise product announcements, combining some products that should be available from today and a few others set to be available in the next few months. Rather than go for a staggered approach, we have it all in one: processors, accelerators, networking, and edge compute. Here’s a quick run-down of what’s happening today, along with links to all of our deeper dive articles, our reviews, and announcement analysis.

The Samsung Galaxy S10+ Snapdragon & Exynos Review: Almost Perfect, Yet So Flawed

See the original posting on Anandtech

We’ve been in 2019 for a while. Although we’ve covered one or two smartphones in the last couple of months, the true “2019 flagship” phone season is really only starting now. Samsung’s Galaxy S10 is among the first releases in this new wave of phones, and for many markets it is outright the very first of a brand-new generation.

Samsung mixed things up this year by announcing the Galaxy S10 in San Francisco instead of at the usual Mobile World Congress event. Though not unprecedented, the big reason for the change in venues was to reflect Samsung’s close collaboration with US carriers such as Verizon on 5G and other matters. Indeed, 5G has been pretty much the buzzword for the last year or more, and the last few months have been especially busy in this regard. To that end, there will be a 5G model of the S10; however, with its limited availability it doesn’t have nearly the same mass-market appeal as the new mainstream variants of the Galaxy S10.

As we near this transition period in technology, the new Galaxy S10 models have instead needed to double down on the fundamental aspects of the phones in order to entice consumers who are increasingly holding on to their smartphones for three years or more. Here the introduction of a new screen, powerful hardware, bigger batteries, as well as a brand-new triple camera setup gives users quite a number of reasons to upgrade.

Today we’ll be reviewing the lead member of the Galaxy S10 family, the Galaxy S10+. And in true AnandTech tradition, we’re going to look at both variants of Samsung’s king of phones: the North American Snapdragon 855 model, as well as the European Exynos 9820 model. With Samsung using different SoCs for what are otherwise (nearly) identical phones, this gives us a unique opportunity to take an in-depth look at the two new processors and compare & contrast them under very similar circumstances. And of course, there’s a great deal to dig into with the Galaxy S10’s new screen and triple-module camera setup. This is going to be a long piece so prepare yourselves!

The Microsoft Surface Laptop 2 Review: Surface Essentials

See the original posting on Anandtech

Microsoft’s Surface lineup was created to bring a spark of innovation into the PC industry at a time where much of the competition was slow to change, and slow to adopt new form factors and new technologies. Microsoft’s Surface Pro lineup has undoubtedly been a huge success in this respect, with the 2-in-1s providing plenty of flexibility coupled with great hardware.

Unsurprisingly then, Microsoft has taken this success and run with it, growing the Surface brand by fleshing out the product line with more models. However even as Microsoft expanded the Surface family, they have always tried to keep that same edge – always embracing a unique feature on their lineup to differentiate a Surface device from the competition. Surface Pro had the kickstand, of course. Surface Book is a laptop with a detachable display. Surface Studio is an all-in-one PC that can fold into a drafting table.

But even with the Pro as a successful template for how to build out the Surface family, Microsoft has one product that doesn’t really fit in with the rest, and that is the Surface Laptop. There are no tricks or unique chassis features here. Microsoft just set out to create a thin and light laptop to fill a void where people want to buy a Surface, but want to use it in their lap; and they don’t need the performance, heft, or price of the Surface Book. It’s a simple concept for a company that’s been more focused on distinctive designs, and, as we’ll see, one that helps tap an important segment of the notebook market.

Huawei Launches the P30 and P30 Pro: P is for Photography

See the original posting on Anandtech

The latest update in Huawei’s P series smartphones are the new P30 and P30 Pro handsets, launched today in Paris. These new devices hope to push the reason why the P series exists – to go above and beyond when it comes to photography. The P30 and P30 Pro include Huawei’s new rear SuperSpectrum camera that allows more light into the sensor, and a front facing 32MP camera for better selfies.

Google Announces Stadia: A Game Streaming Service

See the original posting on Anandtech

Today at GDC, Google announced its new video game streaming service, called Stadia. This builds on news from earlier this year that AMD was powering Project Stream (as the service was then called) with Radeon Pro GPUs, and Google remains a primary partner using AMD’s next-generation CPUs and GPUs.

Stadia is being advertised as the central community for gamers, creators, and developers. The idea is that people can play a wide array of games regardless of the hardware at hand. Back in October, Google debuted the technology showcasing a top-end AAA gaming title running at 60 FPS. Google wants a single place where gamers and YouTube creators can get together – no current gaming platform, according to Google, does this.

Ultimately Google wants to stream games straight to the Chrome browser. Google worked with leading publishers and developers to help build the system infrastructure. Google is one of a few companies with enough content delivery networks around the world to ensure that frame rates are kept high with super low latency.

Users will be able to watch a video about a game, then instantly hit ‘Play Now’ and start playing in under five seconds, without any download or lag. The idea is that a single code base can be enjoyed on any screen. At launch, desktops, laptops, TVs, tablets, and phones will be supported. With Stadia, the datacenter is the platform; no hardware acceleration is required on the device. The experience can be transferred between devices, such as from a Chromebook to a smartphone.

One of the highlights of Google’s demonstration of Stadia was the platform working on Google-enabled TVs.

The platform allows users to play with any USB-connected controller, or a mouse and keyboard. Google will also be releasing its own Stadia Controller, available in three colors – white, black, and light blue. The controller connects via Wi-Fi straight to the cloud, and also detects which device is being used (it’s unclear how this works).

The controller has two new buttons. The first allows saving and sharing the experience out to YouTube. The second is Google Assistant, using the integrated microphone in the controller. This allows game developers to integrate Google Assistant into their games. It also allows users to ask Google when they need help in a game – and the assistant will look for a guide to help.

Stadia uses the same datacenter infrastructure already in place at Google. There are 7500+ edge nodes, which allows compute resources to sit closer to players for lower latency. Custom-designed, purpose-built hardware powers the experience, and interconnected racks have sufficient compute and memory for the most demanding games. The technology has been in development inside Google for years.

At launch, resolutions up to 4K at 60 fps with HDR and surround sound will be supported, and support for up to 8K streaming at 120 fps is planned; the platform has been built to scale to support this. While playing, the stream is duplicated in 4K for direct upload – you get rendering-quality video rather than whatever you could capture locally.

The platform is instance based, so Google can scale when needed. Game developers no longer have to worry about building to a specific hardware performance – the datacenter can scale as required.

The custom AMD GPU offers 10 TF of compute, and is paired with a custom CPU with AVX2 support. Combined, they create a single instance per player. The platform runs Linux and uses Vulkan, with full Unreal and Unity support, as well as Havok engine support. Tool companies are on board.

(When Google says custom CPU and custom GPU – this could be early hardware of AMD’s upcoming generations of technology, put into a custom core configuration / TDP. We’re likely looking at a Zen 2 based CPU, based on AVX2 support listed, and a Radeon Instinct based GPU with tweaked settings specifically for Google.)
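For context on where a headline figure like “10 TF” comes from, GPU vendors typically quote theoretical peak FP32 throughput as shader count × clock × 2 (one fused multiply-add counts as two operations). A minimal sketch of that arithmetic follows; the 56-CU count and ~1.4 GHz clock are illustrative assumptions chosen to land near Google’s quoted figure, not confirmed Stadia specifications:

```python
# Sketch of the standard peak-FLOPS calculation used in GPU marketing figures.
# CU count and clock below are illustrative assumptions, not confirmed specs.

def peak_fp32_tflops(compute_units: int, clock_ghz: float,
                     shaders_per_cu: int = 64) -> float:
    """Theoretical peak FP32 TFLOPS: shaders x clock x 2 ops (FMA)."""
    shaders = compute_units * shaders_per_cu  # GCN/Vega-style CUs: 64 ALUs each
    gflops = shaders * clock_ghz * 2          # 2 ops per ALU per cycle (FMA)
    return gflops / 1000.0                    # GFLOPS -> TFLOPS

# A hypothetical 56-CU part at ~1.4 GHz: 56 * 64 * 1.4 * 2 / 1000 ~= 10.0 TF
print(round(peak_fp32_tflops(56, 1.4), 1))
```

The same formula explains why a modest clock bump or a handful of extra CUs moves the headline TFLOPS number noticeably, even though real-world game throughput rarely scales linearly with it.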

One of the first games supported will be Doom Eternal from id Software, which will support 4K with HDR at 60 fps. Every user will get a dedicated GPU, not shared with any other users.

UL Benchmarks (3DMark) has been working with Google to help benchmark the systems and measure the power of the infrastructure. It also appears that developers can use multiple GPUs if required.

Multiplayer is also supported, at least between different Stadia players. Distributed physics becomes possible, which means up to 1000 players in Battle Royale titles. There’s also the advantage, according to Google, of getting around hackers and cheaters.

Developers can support multi-platform multiplayer, and transfer save files between platforms. Game developers have already been working on multiplayer demos with destructible environments using real-time rigid body physics, allowing for perfect synchronization.

Google also points out that split-screen gaming has not been a priority recently because of the cost of rendering two scenes at once. With Stadia that problem disappears, as each player is powered by a separate instance, reviving the idea of local co-op and squad-based gaming. This also allows for multiple cameras for a single player to navigate a single map, for better tactics in certain types of games. Google says that this ability allows developers to create new types of games.