MojoKid writes: Intel recently released its latest generation of NUC small form factor systems, based on the company's new low-power Broadwell-U series processors. The primary advantages of Intel's 5th Generation Core Series Broadwell-U processors are better performance-per-watt, stronger integrated graphics, and a smaller footprint, all of which are perfectly suited to the company's NUC (Next Unit of Computing) products. The Intel NUC5i5RYK packs a Core i5-5250U processor with on-die Intel HD 6000 series graphics. The system also sports built-in 802.11ac Wi-Fi, Gigabit Ethernet, USB 3.0 and USB 2.0, M.2 SSD support, and a host of other features, all in a 115mm x 111mm x 32.7mm enclosure. Performance-wise, the new 5th Gen Core Series-powered NUC benchmarks like a midrange notebook and is actually up for a bit of light-duty gaming, though it's probably more at home as a home theater PC, media streamer, or kiosk desktop machine.
58 comments | yesterday
MojoKid writes: AMD just unveiled new details about its upcoming Carrizo APU architecture. The company claims the processor, which is still built on Global Foundries' 28nm 28SHP node like its predecessor, will nonetheless deliver big advances in both performance and efficiency. When it was first announced, AMD detailed support for next-generation Radeon graphics (DX12, Mantle, and Dual Graphics support), H.265 decoding, full HSA 1.0 support, and ARM TrustZone compatibility. But perhaps one of the biggest advantages of Carrizo is that the APU and Southbridge are now incorporated into the same die, not split across two separate dies in an MCM package.
This not only improves performance, but also allows the Southbridge to take advantage of the 28SHP process rather than older, more power-hungry 45nm or 65nm process nodes. In addition, the Excavator cores used in Carrizo have switched from a High Performance Library (HPL) to a High Density Library (HDL) design, which reduces the die area taken up by the processing cores by 23 percent, according to AMD. That allows Carrizo to pack in 29 percent more transistors (3.1 billion versus 2.3 billion in Kaveri) in a die that is only marginally larger (250mm2 for Carrizo versus 245mm2 for Kaveri). When all is said and done, AMD is claiming a 5 percent IPC boost for Carrizo and a 40 percent overall reduction in power usage.
108 comments | 2 days ago
According to this story at PC World, Nvidia was hit with a class action lawsuit Thursday that claims it misled customers about the capabilities of the GTX 970, which was released in September. Nvidia markets the chip as having 4GB of performance-boosting video RAM, but some users have complained that the chip falters after using 3.5GB of that allocation. The lawsuit says the remaining half gigabyte runs 80 percent slower than it's supposed to. That can cause images to stutter on a high-resolution screen and some games to perform poorly, the suit says. It was filed in the U.S. District Court for Northern California and names as defendants Nvidia and Giga-Byte Technology, which sells the GTX 970 in graphics cards. Nvidia declined to comment on the lawsuit Friday and Giga-Byte couldn't immediately be reached.
157 comments | 5 days ago
jones_supa writes: One week after NVIDIA disabled overclocking on its GeForce 900M mobility lineup, a company representative has said that NVIDIA will bring the feature back for mobile overclocking enthusiasts. On the GeForce Forums, he writes, "We heard from many of you that you would like this feature enabled again. So, we will again be enabling overclocking in our upcoming driver release next month for those affected notebooks. If you are eager to regain this capability right away, you can also revert back to 344.75."
32 comments | about a week ago
An anonymous reader writes: Nvidia surprised members of the overclocking community this week when it pulled OC support from drivers for its 900M series mobile graphics cards. Many users (particularly those who bought laptops with higher-end cards like the 980M) were overclocking – until the latest driver update. Now, Nvidia is telling customers not to expect OC capabilities to return. “Unfortunately GeForce Notebooks were not designed to support overclocking,” wrote Nvidia’s Manuel Guzman. “Overclocking is by no means a trivial feature, and depends on thoughtful design of thermal, electrical, and other considerations. By overclocking a notebook, a user risks serious damage to the system that could result in non-functional systems, reduced notebook life, or many other effects.”
138 comments | about two weeks ago
jones_supa writes: The 1.7.0 release of Wayland is now available for download. The project thanks all who have contributed, and especially the desktop environments and client applications that now converse using Wayland. In the official announcement, Bryce Harrington of Samsung says the Wayland protocol may be considered 'done,' but that doesn't mean there's no work left to do. Greater emphasis is now being placed on testing, documentation, and bug fixing. As Wayland matures, we are also getting closer to the point where the big Linux distros will start integrating it into their operating systems.
188 comments | about two weeks ago
MojoKid writes: The VESA standards organization has published the eDP v1.4a specification (Embedded DisplayPort) that has some important new features for device manufacturers as they bump up mobile device displays into the 4K category and start looking towards even higher resolutions. eDP v1.4a will be able to support 8K displays, thanks to a segmented panel architecture known as Multi-SST Operation (MSO). A display with this architecture is broken into two or four segments, each of which supports HBR3 link rates of 8.1 Gbps. The updated eDP spec also includes VESA's Display Stream Compression (DSC) standard v1.1, which can improve battery life in mobile devices. In another effort to conserve battery power, VESA has tweaked its Panel Self Refresh (PSR) feature, which saves power by letting GPUs update portions of a display instead of the entire screen.
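A rough sanity check of why segmentation and compression matter at 8K (a minimal Python sketch; the 8.1 Gbps per-segment figure comes from the summary above, while the 60Hz refresh and 24 bits per pixel are illustrative assumptions that ignore blanking overhead):

```python
# Uncompressed bandwidth needed by an 8K (7680x4320) panel at 60Hz, 24bpp.
# Assumed figures for illustration; real panels add blanking overhead.
width, height = 7680, 4320
refresh_hz = 60
bits_per_pixel = 24

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"Uncompressed 8K60 video: {raw_gbps:.1f} Gbps")

# Even split four ways, each segment would carry ~11.9 Gbps, more than the
# summary's 8.1 Gbps per-segment figure, which is why DSC is layered on top.
segments = 4
print(f"Per-segment load at 4 segments: {raw_gbps / segments:.1f} Gbps")
```

The gap between the raw requirement and the per-segment link rate illustrates why MSO and DSC appear together in the spec rather than as independent conveniences.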
94 comments | about two weeks ago
MojoKid (1002251) writes "Dell recently launched its Android-based Venue 8 7000 slate, claiming it's the "world's thinnest" tablet. It measures a mere 6 millimeters thick, or 0.24 inches and change. That's 0.1mm slimmer than Apple's iPad Air 2 and 1.5mm flatter than the iPad mini 3, giving Dell full bragging rights, even if by a hair. Dell also opted for an Intel Atom Z3580 processor under the hood, clocked at up to 2.3GHz. This quad-core part is built on Intel's 22nm Moorefield platform. Compared to its Bay Trail predecessor, Moorefield comes in a smaller package with superior thermal attributes, as well as better graphics performance, courtesy of its PowerVR G6430 graphics core. The Venue 8 7000 also features one of the best 8-inch OLED displays on the market, with edge-to-edge glass and a 2560x1600 resolution. Finally, the Venue 8 7000 is the first device to integrate Intel's RealSense Snapshot Depth Camera, which offers interesting refocusing and stereoscopic effects, with potentially other, more interesting use cases down the road. Performance-wise, the Venue 8 7000 is solid enough, though not a speedster, putting out benchmark numbers that place it in the middle of the pack of premium tablets currently on the market."
120 comments | about three weeks ago
An anonymous reader writes: I buy massive collections of trading card games (Magic: The Gathering, Yu-Gi-Oh!, Pokemon, Weiss Schwarz, Cardfight Vanguard, etc.), and I've gotten the process fairly streamlined as far as price checking, grading, and sorting go. Part of my process involves using higher-quality webcams positioned over the tops of the cards, which are in a stack. I keep a cam window on the screen to show a larger, brighter version of the card. What I'm wondering: is there an OCR solution out there that will look at the same spot on the screen, capture it, OCR it, and dump the text to the clipboard? I've tried several open source solutions, but none of them quite fit my needs. What I'd really like is to be able to hit a hotkey and have my clipboard populated with the textual data from the graphics in a pre-set x,y window range. I may be asking for a lot, but then again, I'm sure someone out there has needed this type of setup before. Anyone have any recommendations?
96 comments | about three weeks ago
jones_supa writes: A few weeks ago, an ASUS Nordic support representative inadvertently made available an interim build of the NVIDIA graphics driver. This was a mobile driver build (version 346.87) for the ASUS G751 line of laptops. The driver was pulled shortly afterward, but PC Perspective managed to get its hands on a copy and installed it on an ASUS G751 review unit. To everyone's surprise, a 'G-SYNC display connected' system tray notification appeared. It turned out to be a functional NVIDIA G-SYNC setup on a laptop. PC Perspective found a 100Hz LCD panel inside, ran some tests, and also noted that G-SYNC is picky about the Tcon implementation of the LCD, which can lead to glitches if not implemented carefully. NVIDIA confirmed that G-SYNC on mobile is coming in the near future, but the company wasn't yet able to discuss an official arrival date or technology specifics.
42 comments | about three weeks ago
Bryce writes: Four years after the last major Inkscape release, news is out about version 0.91 of this powerful vector drawing and painting tool. The main reason for the multi-year delay is that the project has switched from its old custom rendering engine to Cairo, improving support for open source standards. This release also adds symbol libraries and support for Visio stencils, cross-platform WMF and EMF import and export, a native 64-bit Windows build, scads of bug fixes, and much more. Check out the full release notes for more information about what has changed, or just jump right to downloading your package for Windows, Linux, or Mac OS X.
134 comments | about three weeks ago
HughPickens.com writes: Nick Summers has an interesting article at Bloomberg about the epidemic of 90 ATM bombings that has hit Britain since 2013. ATMs are vulnerable because the strongbox inside has two essential holes: a small slot in front that spits out bills to customers and a big door in back through which employees load reams of cash in large cassettes. "Criminals have learned to see this simple enclosure as a physics problem," writes Summers. "Gas is pumped in, and when it's detonated, the weakest part—the large hinged door—is forced open. After an ATM blast, thieves force their way into the bank itself, where the now gaping rear of the cash machine is either exposed in the lobby or inside a trivially secured room. Set off with skill, the shock wave leaves the money neatly stacked, sometimes with a whiff of acetylene's distinctive garlic odor." The rise in gas attacks has created a market opportunity for the companies that construct ATM components. Several manufacturers now make various anti-gas-attack modules: some absorb shock waves, some detect gas and render it harmless, and some emit sound, fog, or dye to discourage thieves in the act.
As far as anyone knows, there has never been a gas attack on an American ATM. The leading theory points to the country's primitive ATM cards. Along with Mongolia, Papua New Guinea, and not many other countries, the U.S. doesn't require its plastic to contain an encryption chip, so stealing cards remains an effective, nonviolent way to get at the cash in an ATM. Encryption chip requirements are coming to the U.S. later this year, though. And given the gas raid's many advantages, it may be only a matter of time until the back of an American ATM comes rocketing off.
378 comments | about a month ago
Vigile writes: Over the weekend NVIDIA sent out its first official response to the claims of hampered performance on the GTX 970 and a potential lack of access to 1/8th of its on-board memory. Today NVIDIA clarified the situation again, this time with some important changes to the specifications of the GPU. First, the ROP count and L2 cache capacity of the GTX 970 were incorrectly reported at launch (last September). The GTX 970 has 56 ROPs and 1792 KB of L2 cache, compared to the GTX 980's 64 ROPs and 2048 KB of L2 cache; previously both GPUs were claimed to have identical specs. Because of this change, one of the 32-bit memory channels is accessed differently, forcing NVIDIA to create 3.5GB and 0.5GB pools of memory to improve overall performance for the majority of use cases. The smaller 500MB pool operates at 1/7th the speed of the 3.5GB pool and thus lowers total graphics system performance by 4-6% when it comes into play. That occurs when games request MORE than 3.5GB of memory, which happens only in extreme combinations of resolution and anti-aliasing. Still, the jury is out on whether NVIDIA has answered enough questions to temper the fire from consumers.
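For a sense of scale, a quick back-of-the-envelope sketch in Python (purely illustrative; the pool sizes and the 1/7th figure come from the summary above, and the uniform-access model is an assumption, not how real workloads behave):

```python
# GTX 970 memory pools as described above: a fast 3.5GB pool and a
# 0.5GB pool running at 1/7th of its speed.
fast_gb, slow_gb = 3.5, 0.5
slow_fraction = 1 / 7  # slow pool's speed relative to the fast pool

# Running at 1/7th speed means the slow pool is ~86% slower:
penalty_pct = (1 - slow_fraction) * 100
print(f"Slow pool is {penalty_pct:.1f}% slower than the fast pool")

# Worst case: every byte of all 4GB touched equally often (real drivers
# keep hot data in the fast pool, so this overstates the impact):
avg_speed = (fast_gb * 1.0 + slow_gb * slow_fraction) / (fast_gb + slow_gb)
print(f"Capacity-weighted average: {avg_speed * 100:.1f}% of full speed")
```

Even this pessimistic uniform-access model caps the average slowdown at roughly 11%, which makes NVIDIA's claimed 4-6% real-world impact look plausible for workloads that mostly stay in the fast pool.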
113 comments | about 1 month ago
MojoKid writes: After last Wednesday's Windows 10 event, early adopters and IT types were probably anxious for Microsoft to release the next preview build. Fortunately, it didn't take long: it came out on Friday, and it's safe to say it introduced even more than many were anticipating (but still no Spartan browser). In case you missed it, DirectX 12 is actually enabled in this Windows 10 release, though unfortunately we'll need to wait for graphics drivers and apps that support it to take advantage of DX12 features and performance enhancements.
135 comments | about a month ago
Bram Stolk writes: So, I am running GNU/Linux on a modern Haswell CPU, with an old Radeon HD5xxx from 2009. I'm pretty happy with the open source Gallium driver for 3D acceleration. But now I want to do some GPGPU development using OpenCL on this box, and the old GPU will no longer cut it. What do my fellow technophiles from Slashdot recommend as a replacement GPU? Go NVIDIA, go AMD, or just use the integrated Intel GPU instead? Bonus points for open source solutions. Performance isn't really important, but OpenCL driver maturity is.
110 comments | about a month ago
Vigile writes: Over the past week or so, owners of the GeForce GTX 970 have found several instances where the GPU was unable or unwilling to address memory capacities over 3.5GB despite having 4GB of on-board frame buffer. Specific benchmarks were written to demonstrate the issue, and users even found ways to configure games to utilize more than 3.5GB of memory using DSR and high levels of MSAA. While the GTX 980 can access all 4GB of its memory, the GTX 970 appeared less likely to do so and saw a dramatic performance hit when it did. NVIDIA responded today, saying that the GTX 970 has "fewer crossbar resources to the memory system" as a result of disabled groups of cores called SMMs. NVIDIA states that "to optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section" and that the GPU gives "higher priority" to the larger pool. The question that remains: should this affect gamers' view of the GTX 970? If performance metrics already take the different memory configuration into account, then I don't see the GTX 970 declining in popularity.
145 comments | about a month ago
MojoKid writes: NVIDIA is launching a new Maxwell desktop graphics card today, targeted at the sweet spot of the graphics card market ($200 or so), currently occupied by its previous-gen GeForce GTX 760 and the older GTX 660. The new GeForce GTX 960 features a brand new Maxwell-based GPU dubbed the GM206. NVIDIA was able to optimize the GM206's power efficiency without moving to a new process, by tweaking virtually every part of the GPU. NVIDIA's reference specifications for the GeForce GTX 960 call for a base clock of 1126MHz and a boost clock of 1178MHz. The GPU packs 1024 CUDA cores, 64 texture units, and 32 ROPs, half of what's inside the top-end GeForce GTX 980. The 2GB of GDDR5 memory on GeForce GTX 960 cards is clocked at a speedy 7GHz (effective GDDR5 data rate) over a 128-bit memory interface. The new GeForce GTX 960 is a low-power upgrade for gamers with GeForce GTX 660-class cards or older, which make up a good percentage of the market now. It's usually faster than the previous-generation GeForce GTX 760 but, depending on the game title, can trail it as well, due to its narrower memory interface.
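Those memory figures imply the card's peak bandwidth via the usual rule of thumb (a minimal Python check; the GTX 980 comparison at a 256-bit bus and the same 7GHz data rate is an added assumption, not from the summary):

```python
# Peak memory bandwidth = effective per-pin data rate x bus width / 8 bits.
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# GTX 960: 7GHz effective GDDR5 over a 128-bit interface (from the summary).
print(f"GTX 960: {peak_bandwidth_gb_s(7.0, 128):.0f} GB/s")  # 112 GB/s

# GTX 980 for comparison (assumed: 256-bit bus at the same data rate).
print(f"GTX 980: {peak_bandwidth_gb_s(7.0, 256):.0f} GB/s")  # 224 GB/s
```

That halved bus width is also where the "can trail the GTX 760" caveat comes from: the older card's wider 256-bit interface gives it more raw memory bandwidth despite its older GPU.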
114 comments | about a month ago
MojoKid writes: Dell has been strategically setting up its new Venue 8 7000 tablet for cameo appearances over the past few months, starting at the Intel Developer Forum in September of last year, then again at Dell World in November and at CES 2015. What's interesting about this new device, in addition to Intel's RealSense camera, is its Atom Z3580 quad-core processor, which is based on Intel's latest Moorefield architecture. Moorefield builds upon Intel's Merrifield Atom feature set and offers two additional CPU cores with up to a 2.3GHz clock speed, an enhanced PowerVR G6430 GPU, and support for faster LPDDR3-1600 memory. Moorefield is also built for Intel's XMM 7260 LTE modem platform, which supports carrier aggregation. Overall, Moorefield looks solid, with performance ahead of a Snapdragon 801 but not quite able to catch the 805, NVIDIA's Tegra K1, or Apple's A8X in terms of graphics throughput. On the CPU side, Intel's beefed-up quad-core Atom variant shows well.
22 comments | about a month ago
According to The Next Web, emoji support has landed in the latest developer builds of Chrome for OS X, meaning that emoji can be seen on websites and entered into text fields for the first time without issues. ... Safari users on OS X could already see emoji on the Web without issue, since Apple built that in. The bug in Chrome was fixed on December 11 and recently went into testing on Chrome's Canary track. From there, we can expect it to move to the consumer version of Chrome in the coming weeks.
104 comments | about a month and a half ago