Northern Taiwan power outage causes small impact on Micron and Nanya fabs

An unexpected power interruption hit the Linkou, Taishan, and Xinzhuang districts of New Taipei City in northern Taiwan, and a few chipmakers with factories in the region were reportedly affected. According to a report by TechNews (machine translated), Micron's Taoyuan and Taichung factories experienced a voltage drop due to a lightning strike on August 13, and Micron confirms that all employees are safe and operations are continuing as usual.

Fabs belonging to memory maker Micron and its partner, Nanya Technology, experienced some form of interruption to their manufacturing processes. There were also rumors that Micron's dry-etch and wet-process steps may have been affected by the voltage drop.

Nanya Technology, on the other hand, suffered a 20-minute power loss. Even though the factory's uninterruptible power supply (UPS) kicked in for most of its crucial areas, such as lithography and etching, company management is still checking whether systems not protected by a UPS sustained any damage.

Overall, we expect the impact of the power outage to be minor. The affected companies still have inventory in each production line, and supply is still deemed sustainable. However, we cannot discount the global impact of a disruption at this level, especially as Micron and Nanya are some of the biggest global chip suppliers.

For example, a Micron fab suffered a one-hour power outage in 2020, and the market responded with an increase in DRAM prices. A disruption happened again a few days later when a 6.7-magnitude earthquake hit the area, reducing graphics DRAM output and causing prices to spike again.

These events show how fragile the semiconductor supply chain is. With the most advanced chips largely produced in vulnerable hot spots like Taiwan and South Korea, a flare-up in the region could result in a global chip winter. Even the U.S. recognizes this, saying that a Chinese seizure of TSMC would devastate the American economy.

While the U.S. is investing billions of dollars via the CHIPS Act to jumpstart homegrown semiconductor research, development, and production, experts say that TSMC will still lead global semiconductor manufacturing until 2032. So, until leading-edge chip manufacturing is spread more evenly across the world, the global digital ecosystem remains at risk from any major event in the region, whether natural or human-made.

iPhone Mirroring now lets you rearrange your Home Screen with ‘jiggle mode’ in iOS 18 and macOS Sequoia

iPhone Mirroring is one of the headlining features of iOS 18 and macOS Sequoia, set to launch to the public this fall. With the latest betas of these updates, Apple has added a new feature to iPhone Mirroring: the ability to enter “jiggle mode” and rearrange your iPhone’s Home Screen.

With iPhone Mirroring enabled, you can now long-press on your iPhone’s Home Screen with your Mac’s mouse or trackpad to enter jiggle mode. The feature then works just as it does on your iPhone, allowing you to drag icons and widgets between different pages. You can also adjust widget sizes, manage the new icon tinting feature in iOS 18, and add new widgets.

In previous betas of iOS 18 and macOS Sequoia, the ability to enter jiggle mode and edit your iPhone’s Home Screen wasn’t available. Years ago, you used to be able to edit your iPhone’s Home Screen using iTunes, but that feature was removed during the transition from iTunes to Music.

The ability to edit your iPhone’s Home Screen with iPhone Mirroring is available in the latest betas of iOS 18, iOS 18.1, macOS Sequoia 15, and macOS Sequoia 15.1. iPhone Mirroring is not currently available in the European Union.

There are a few things still missing from iPhone Mirroring in macOS Sequoia and iOS 18, including the ability to access Notification Center and Control Center, and to edit your iPhone’s Lock Screen. Whether Apple plans to add those capabilities before macOS Sequoia and iOS 18 are released this fall remains to be seen.

If you’re running the iOS 18 and macOS Sequoia betas, have you found iPhone Mirroring to be a useful new feature? Let us know in the comments.



G.Skill launches ultra-low-latency RAM for Intel and AMD CPUs — DDR5-6400 32GB memory kit dips to C30

G.Skill is releasing a new memory kit optimized for low-latency timings to rival the best RAM. The kit runs at DDR5-6400 with timings of 30-39-39-102 and is available in a 32GB (2x16GB) capacity. It is one of the lowest-latency configurations in G.Skill’s DDR5 lineup.

The new DDR5-6400 CL30 memory kits will be available in Intel-optimized and AMD-optimized versions. For Intel systems, the spec will be offered in the Trident Z5 RGB and Trident Z5 Royal series with Intel XMP 3.0 support. For AMD systems, the Trident Z5 Neo RGB and Trident Z5 Royal Neo will also get the spec, featuring AMD EXPO support.

The DDR5-6400 CL30 kit is a competitive product, with a first-word latency of just 9.375 ns. The 9–10 ns range covers the fastest DDR5 kits (with a few exceptions), including DDR5-8000 kits with sub-CL40 timings.
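For a sanity check on that figure, the first-word (CAS) latency follows directly from the CL value and the transfer rate; here is a minimal sketch of the arithmetic:

```python
# First-word (CAS) latency in nanoseconds: CL cycles divided by the memory
# clock, where the memory clock is half the DDR transfer rate.
def cas_latency_ns(cl: int, transfer_rate_mtps: int) -> float:
    memory_clock_mhz = transfer_rate_mtps / 2   # DDR: two transfers per clock
    return cl / memory_clock_mhz * 1000         # cycles / MHz -> nanoseconds

print(cas_latency_ns(30, 6400))   # DDR5-6400 CL30 -> 9.375 ns
print(cas_latency_ns(40, 8000))   # DDR5-8000 CL40 -> 10.0 ns, for comparison
```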

G.Skill’s new memory options will be optimal for AMD’s AM5 platform, particularly for users running AMD’s Ryzen 7000 and Ryzen 9000 series processors. Due to fabric limitations in AMD’s Zen 4 and Zen 5 architectures, DDR5-6000 to DDR5-6400 is considered the limit of what AMD’s AM5 Ryzen processors can achieve while running the Infinity Fabric at a 1:1 ratio. 

To get around this limitation, AMD allows its CPUs to run at a 2:1 ratio, where the Infinity Fabric runs at half the clock speed of the RAM, but running the fabric at half the clock rate incurs a performance penalty. Intel’s competing CPUs don’t have these fabric limitations, but regardless, G.Skill’s new configuration, with its very tight timings, will still be advantageous.
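To illustrate the ratio described above, here is a simplified sketch; real AM5 tuning distinguishes FCLK, UCLK, and MEMCLK, which is glossed over here:

```python
# Simplified view: the memory clock is half the DDR transfer rate, and the
# memory-controller/fabric side either matches it (1:1) or runs at half speed (2:1).
def fabric_side_clock_mhz(transfer_rate_mtps: int, ratio: str = "1:1") -> float:
    memclk = transfer_rate_mtps / 2
    return memclk if ratio == "1:1" else memclk / 2

print(fabric_side_clock_mhz(6400, "1:1"))   # 3200.0 MHz -- the DDR5-6400 sweet spot
print(fabric_side_clock_mhz(8000, "2:1"))   # 2000.0 MHz -- the half-speed penalty case
```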

This specific memory configuration will be available in only 32GB capacity (for now). Pricing has not been disclosed, but G.Skill’s DDR5-6400 C30 memory kits will debut later this month.

Flexible Fiber LEDs made with perovskite quantum wires should enable advanced wearable displays and other technologies

Yesterday, researchers from the School of Engineering at the Hong Kong University of Science and Technology (HKUST) provided exclusive quotes to TechXplore on the release of their study, “Full-color fiber light-emitting diodes based on perovskite quantum wires.” They insist that their “innovative approach (perovskite wiring) for fiber LEDs opens up new possibilities for fabricating unconventional 3D-structured lighting sources, paving the way for advanced wearable display technologies.” The quote comes from Professor Fan Zhiyong of HKUST’s Department of Electronic & Computer Engineering and Department of Chemical & Biological Engineering, who led the research team for the project’s duration.

The project’s primary purpose was to find a better way to make fiber LEDs, also called fi-LEDs. The difficulty in making these flexible LEDs has always been a matter of materials, which often cause uneven or inefficient light emission. Using metal halide perovskites with a porous alumina membrane to create quantum wires appears to mitigate these downsides, resulting in a series of full-color fi-LEDs. These fi-LEDs are already in good enough shape that they should be suitable for textile lighting applications.

Per HKUST, this work in the short-term “presents a significant advancement in the field of fi-LEDs. Future developments could focus on enhancing the efficiency and stability of the fi-LEDs, exploring new perovskite compositions for a broader range of emission colors, and integrating these devices into commercial textile products.” 

In the long term, advancing fi-LEDs in this manner is also expected to improve future wearable display technologies and unconventionally shaped lighting sources in general. Even underwater applications, for scenarios like swimming pools and scuba diving, seem well within the range of possibility, judging by one of the test applications being submerged in water.

With “bendable, stretchable, twistable, and waterproof” fi-LEDs now in play, the future for wearable applications is looking quite strong — though, of course, any practical clothing brand may have second thoughts about integrating fiber LEDs into its clothing. Outside of scenarios where specific uniforms or outfits are in play (a la scuba diving), it’s unlikely that fi-LEDs will introduce a new era of sci-fi fashion trends — but potential practical applications do seem interesting.

8.8 million AI-capable PCs shipped in Q2 2024 — firm believes the AI PC market is on track to ship around 44 million units

The noise around AI in general, and AI PCs in particular, is quite significant these days, as everyone is trying to take advantage of the buzzword and present their products as AI-capable. Technically, 14% of PCs shipped globally in the second quarter contained a neural processing unit (NPU), formally making them AI PCs, reports Canalys.

The latest CPUs from AMD, Apple, Intel, and Qualcomm contain NPUs. As a result, 8.8 million AI-capable PCs were shipped in Q2 2024, comprising 14% of total PC shipments for the quarter, reports Canalys. As AMD, Intel, and Qualcomm ramp up production of their latest processors with NPUs, analysts believe shipments of AI-capable PCs are poised for rapid growth in the latter half of 2024 and into 2025.  
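As a quick back-of-the-envelope check on those Canalys figures:

```python
# Back-of-the-envelope check on the Canalys numbers quoted above.
ai_pcs_shipped = 8.8e6   # Q2 2024 AI-capable PC shipments
ai_share = 0.14          # 14% of all PCs shipped in the quarter

total_pcs = ai_pcs_shipped / ai_share
print(f"Implied total Q2 2024 PC shipments: {total_pcs / 1e6:.1f} million")  # ~62.9 million
```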

In the Windows segment, AI-capable PC shipments surged 127% sequentially in the second quarter. Lenovo, the world’s largest PC maker, introduced its Snapdragon X Elite-based PCs, including the Yoga Slim 7x and ThinkPad T14s, boosting its AI-capable share to about 6% of total Windows PC shipments and reflecting 228% growth. HP followed closely with an 8% share, launching the Omnibook X 14 and EliteBook Ultra G1 alongside its Core Ultra devices. With just under 7% share, Dell launched Copilot+ PCs across several lines, though availability was staggered.

“A key benefit from AI-capable PCs that has materialized for PC OEMs is the growth boost within their premium offerings,” said Ishan Dutt, Principal Analyst at Canalys.

He continued, “Windows PC shipments in the above $800 range grew 9% sequentially in Q2 2024, with AI-capable PC shipments in those price bands up 126%. As the range of features from first- and third-party applications that leverage the NPU increase and the benefits to performance and efficiency become clearer, the value proposition for AI-capable PCs shall remain strong. This is especially important over the next 12 months as a significant portion of the installed base will be refreshed as part of the ongoing Windows upgrade cycle.”

Meanwhile, only Qualcomm’s Snapdragon X Elite processors currently comply with the requirements of Microsoft’s Copilot+ PCs, which call for an NPU delivering at least 45 TOPS. AMD and Intel are only now bringing their Copilot+ platforms, the AMD Ryzen AI 300-series and the Intel Core Ultra 2-series ‘Lunar Lake,’ to market. Copilot+ requirements will likely become the de facto standard for AI PCs: even though contemporary AMD and Intel processors include an NPU, software developers will likely target Microsoft’s requirements, so it remains to be seen how many of the AI PCs shipped today will still count as AI-capable in a year or two.

Apple currently leads the AI-capable PC market, as all of its M-series chips have packed an NPU since 2020, and by now there are tens of millions of Macs with a neural processing unit capable of running AI workloads. However, while Apple’s own programs use the company’s NPUs, only a limited number of third-party programs can take advantage of those units. Canalys believes that the introduction of Apple Intelligence, now in beta in the U.S., is set to enhance AI functionality for Mac users, potentially scaling quickly once fully launched, given its compatibility with most existing Mac devices.

Analysts from Canalys believe the AI-capable PC market is on track to ship around 44 million units in 2024 and 103 million in 2025. The combination of new product releases, wider price range availability, and growing demand for AI-enhanced features is expected to drive this growth further, solidifying AI’s role in future PCs.

Security Bite: Apple (finally) making it harder to override Gatekeeper is a telling move


Last week, Apple confirmed that users on macOS Sequoia will no longer be able to Control-click to override Gatekeeper and open software that isn’t signed or notarized by the company. It’s a slight change, but one I believe will have a significant impact. It also gives us a glimpse into what might be happening behind the scenes at Apple as Mac malware gets more clever and its volume reaches all-time highs.

I’ve always been baffled by how easily any non-sophisticated Jonny Appleseed user could bypass Mac’s two best security features (Gatekeeper and XProtect) in just two clicks.

This typically happens when a user attempts to download unsigned software, like a pirated application. When they double-click to open it, macOS will present an error message stating, “[application.pkg] can’t be opened because it is from an unidentified developer.” From here, the user might let out a quick sigh and Google the problem, only to find they just have to right-click the package and hit “Open.”
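If you’re curious what Gatekeeper itself thinks of an app before you open it, macOS ships a command-line assessment tool called spctl; here’s a minimal sketch of wrapping it from Python (the app path is just a placeholder):

```python
# Ask Gatekeeper to assess an app bundle before double-clicking it.
# Uses macOS's built-in `spctl` tool; the path below is a placeholder.
import subprocess

def gatekeeper_accepts(app_path: str) -> bool:
    result = subprocess.run(
        ["spctl", "--assess", "--verbose", app_path],
        capture_output=True, text=True,
    )
    # Assessment details, e.g. ".../Example.app: accepted  source=Notarized Developer ID"
    print(result.stderr.strip() or result.stdout.strip())
    return result.returncode == 0   # exit code 0 means Gatekeeper would allow it

gatekeeper_accepts("/Applications/Example.app")
```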

I understand it’s a bit of a catch-22 to say that “non-sophisticated” users would know how to bypass macOS Gatekeeper and the XProtect suite, let alone find and download pirated software. However, what if they thought they were installing a legitimate app, and that’s how it instructed them to open it?

Malware authors are more clever than ever. One of the latest trends is cloning real applications, often productivity apps like Notion or Slack, and injecting malware somewhere in the code. Authors then create install screens like the one below, instructing the user to right-click and open the malware to get around Gatekeeper. The crazy part is that sometimes users will go on to use these applications for quite some time and never know their system has been infected. Persistence is key for cybercriminals.

I wouldn’t put it past my 79-year-old grandmother to be able to do this. (Image: a Shlayer malware install screen, via Jamf.)

Now in macOS Sequoia, users will need to independently review the app’s security details in System Settings > Privacy & Security before it is allowed to run. It’s great to finally see Apple taking proactive steps to encourage users to review what they’re installing.

However, is this an indication of how bad malware is getting on the platform? Maybe, but it could also be a move to encourage more developers to submit apps for notarization.

The facts are: In 2023, we witnessed a 50% YoY increase in new macOS malware families. Additionally, Patrick Wardle, founder of Objective-See, told Moonlock Lab that the number of new macOS malware specimens increased by about 100% in 2023 with no signs of a slowdown. And just a few months back, Apple pushed its largest-ever XProtect update with 74 new Yara detection rules.

Regardless, I once brought this up with an Apple employee internally and wasn’t met with much interest. So I’m glad someone changed their mind, no matter the reason.



The GPU benchmarks hierarchy 2024: All recent graphics cards ranked

GPU Benchmarks & Performance Hierarchy

Our GPU benchmarks hierarchy ranks all current and previous generation graphics cards by performance, drawing on Tom’s Hardware’s exhaustive benchmarking of current and previous generation GPUs, including all of the best graphics cards. Whether it’s playing games, running artificial intelligence workloads like Stable Diffusion, or doing professional video editing, your graphics card typically plays the biggest role in determining performance — even the best CPUs for gaming take a secondary role.

Earlier this year, we had what was likely the final ‘refresh’ of the current generation GPUs: Nvidia launched the RTX 4070 Super, RTX 4070 Ti Super, and RTX 4080 Super; and AMD released the RX 7600 XT, plus the RX 7900 GRE arrived in the U.S. We don’t expect any further shakeups in the GPU hierarchy until the fall, when the Nvidia Blackwell RTX 50-series, Intel Battlemage, and AMD RDNA 4 GPUs are likely to arrive — though AMD may not launch until 2025.

We’re also looking to revamp our GPU testing, with new games and possibly a switch in platform. After the Core i9-13900K issues we experienced — we eventually RMA’ed our chip — we’re now looking toward AMD Zen 5 and Arrow Lake (we’ll probably wait for the X3D Zen 5 chips). Until then, our recent reviews use testing from the 13900K testbed with additional games, and we’ve included those results in the charts at the bottom of the page.

Our full GPU hierarchy using traditional rendering (aka rasterization) comes first, and below that we have our ray tracing GPU benchmarks hierarchy. Those of course require a ray tracing-capable GPU, so only AMD’s RX 7000/6000-series, Intel’s Arc, and Nvidia’s RTX cards are present. The results are all without enabling DLSS, FSR, or XeSS on the various cards, mind you.

Nvidia’s Ada Lovelace architecture powers its current generation RTX 40-series, with new features like DLSS 3 Frame Generation — and for all RTX cards, Nvidia DLSS 3.5 Ray Reconstruction (which is only used in a few games so far). AMD’s RDNA 3 architecture powers the RX 7000-series, with seven desktop cards filling out the product stack. Intel’s Arc Alchemist architecture brings a third player into the dedicated GPU party, even if it’s more of a competitor to the previous generation midrange offerings.

On page two, you’ll find our 2020–2021 benchmark suite, which has all of the previous generation GPUs running our older test suite on a Core i9-9900K testbed. It’s no longer being actively updated. We also have the legacy GPU hierarchy (without benchmarks, sorted by theoretical performance) for reference purposes.

The following tables sort everything solely by our performance-based GPU gaming benchmarks, at 1080p “ultra” for the main suite and at 1080p “medium” for the DXR suite. Price, power consumption, overall efficiency, and features aren’t factored into the rankings here. The current 2024 results use an Alder Lake Core i9-12900K testbed. Now let’s hit the benchmarks and tables.

GPU Benchmarks Ranking 2024

For our latest GPU benchmarks, we’ve tested nearly every GPU released in the past seven years, plus some extras, at 1080p medium and 1080p ultra, and sorted the table by the 1080p ultra results. Where it makes sense, we also test at 1440p ultra and 4K ultra. All of the scores are scaled relative to the top-ranking 1080p ultra card, which in our new suite is the RTX 4090 — a card whose lead only grows at 1440p and 4K.

You can also see the above summary chart showing the relative performance of the cards we’ve tested across the past several generations of hardware at 1080p ultra — swipe through the above gallery if you want to see the 1080p medium, 1440p, and 4K ultra images. There are a few missing options (e.g., the GT 1030, RX 550, and several Titan cards), but otherwise, it’s basically complete. Note that we also have data in the table below for some of the other older GPUs.

The eight games we’re using for our standard GPU benchmarks hierarchy are Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX11 Nvidia, DX12 AMD/Intel), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). The fps score is the geometric mean (equal weighting) of the eight games. Note that the specifications column links directly to our original review for the various GPUs. 
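In other words, the composite score is a plain geometric mean that is then expressed relative to the top card; a minimal sketch of that math (the per-game numbers below are made up purely for illustration):

```python
# Composite fps score as described above: the geometric mean (equal weighting)
# of the per-game results, then scaled relative to the top card.
from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

game_fps = [154.0, 120.5, 98.7, 143.2, 110.9, 131.4, 105.8, 126.3]  # eight games, illustrative values
score = geomean(game_fps)
print(f"Composite score: {score:.1f} fps")

# Relative scaling against the top 1080p ultra card (RTX 4090, 154.1 fps in the table):
top_score = 154.1
print(f"Relative: {score / top_score * 100:.1f}%")
```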

GPU Rasterization Hierarchy, Key Takeaways

- Nvidia’s RTX 4090 takes the top spot but costs almost twice as much as the second-place RTX 4080 Super.
- The RTX 4090 can encounter CPU bottlenecks at 1440p and especially 1080p.
- New cards typically match previous-gen GPUs that are one or two model tiers “higher” (e.g. RTX 4070 Ti vs. RTX 3090 Ti, or RX 6600 XT vs. RX 5700 XT). This is not universally true, however (e.g. the RX 7800 XT is only slightly faster than the prior 6800 XT).
- Looking at 1440p, the RTX 4080 Super ranks as the most efficient GPU, with other 40-series GPUs rounding out the top ten. AMD’s most efficient GPU is the RX 7900 XTX. Intel’s Arc GPUs rank near the bottom of the chart in terms of efficiency.
- The best GPU value in FPS per dollar at 1440p is the Arc A580, followed by the RX 6600, Arc A750, RX 6800, and RTX 4060 (see the sketch below).
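The value metric in that last point is simple division; here it is applied to the 1440p ultra figures and street prices from the table below:

```python
# FPS-per-dollar value metric, using 1440p ultra fps and prices from the table below.
cards = {
    "Arc A580": (48.8, 169),
    "RX 6600":  (44.8, 199),
    "Arc A750": (53.7, 239),
    "RX 6800":  (89.2, 399),
    "RTX 4060": (61.2, 294),
}
for name, (fps, price) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name:10s} {fps / price:.3f} fps per dollar")
```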

Graphics CardLowest Price1080p Ultra1080p Medium1440p Ultra4K UltraSpecifications (Links to Review)GeForce RTX 4090$1849100.0% (154.1fps)100.0% (195.7fps)100.0% (146.1fps)100.0% (114.5fps)AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450WRadeon RX 7900 XTX$90996.7% (149.0fps)97.2% (190.3fps)92.6% (135.3fps)83.1% (95.1fps)Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355WGeForce RTX 4080 Super$99996.2% (148.3fps)98.5% (192.7fps)91.0% (133.0fps)80.3% (91.9fps)AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320WGeForce RTX 4080$118595.4% (147.0fps)98.1% (192.0fps)89.3% (130.4fps)78.0% (89.3fps)AD103, 9728 shaders, 2505MHz, 16GB [email protected], 717GB/s, 320WRadeon RX 7900 XT$69993.4% (143.9fps)95.8% (187.6fps)86.1% (125.9fps)71.0% (81.2fps)Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315WGeForce RTX 4070 Ti Super$79992.3% (142.3fps)96.8% (189.4fps)83.5% (122.0fps)68.7% (78.6fps)AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285WGeForce RTX 4070 Ti$69989.8% (138.3fps)95.7% (187.2fps)79.8% (116.5fps)63.8% (73.0fps)AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285WRadeon RX 7900 GRE$54988.1% (135.8fps)94.1% (184.3fps)78.0% (113.9fps)60.5% (69.3fps)Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260WGeForce RTX 4070 Super$58987.1% (134.2fps)94.6% (185.1fps)75.2% (109.8fps)57.8% (66.1fps)AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220WRadeon RX 6950 XT$57984.7% (130.5fps)91.7% (179.4fps)75.3% (110.1fps)58.6% (67.1fps)Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335WGeForce RTX 3090 Ti$173984.7% (130.5fps)90.5% (177.1fps)77.1% (112.7fps)66.3% (75.9fps)GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450WRadeon RX 7800 XT$50983.9% (129.3fps)91.5% (179.1fps)72.4% (105.8fps)54.4% (62.3fps)Navi 32, 3840 shaders, 2430MHz, 16GB [email protected], 624GB/s, 263WGeForce RTX 3090$127981.4% (125.5fps)88.9% (174.0fps)72.5% (106.0fps)61.8% (70.7fps)GA102, 10496 shaders, 1695MHz, 24GB [email protected], 936GB/s, 350WRadeon RX 6900 XT$77980.9% (124.6fps)89.6% (175.3fps)69.9% (102.1fps)53.5% (61.2fps)Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300WGeForce RTX 3080 Ti$108980.4% (123.9fps)87.8% (171.8fps)71.1% (103.9fps)60.1% (68.8fps)GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350WRadeon RX 6800 XT$45979.6% (122.7fps)88.5% (173.2fps)67.8% (99.0fps)50.6% (57.9fps)Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300WGeForce RTX 3080 12GB$99979.2% (122.1fps)86.5% (169.4fps)70.0% (102.3fps)58.3% (66.7fps)GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400WGeForce RTX 4070$53979.2% (122.0fps)90.7% (177.5fps)66.9% (97.8fps)50.0% (57.2fps)AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200WGeForce RTX 3080$87976.0% (117.0fps)85.6% (167.6fps)66.0% (96.4fps)54.1% (62.0fps)GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320WRadeon RX 7700 XT$44975.3% (116.1fps)87.7% (171.6fps)63.4% (92.7fps)45.0% (51.5fps)Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245WRadeon RX 6800$39974.4% (114.6fps)86.2% (168.7fps)61.0% (89.2fps)44.3% (50.7fps)Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250WGeForce RTX 3070 Ti$59967.5% (104.0fps)81.6% (159.8fps)56.7% (82.8fps)41.7% (47.7fps)GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290WRadeon RX 6750 XT$35966.8% (102.9fps)82.6% (161.6fps)52.9% (77.2fps)37.4% (42.8fps)Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250WGeForce RTX 4060 
Ti 16GB$42965.3% (100.6fps)82.6% (161.7fps)51.8% (75.7fps)36.4% (41.6fps)AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160WGeForce RTX 4060 Ti$37465.1% (100.4fps)81.8% (160.1fps)51.7% (75.6fps)34.6% (39.6fps)AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160WTitan RTX 64.5% (99.3fps)80.0% (156.6fps)54.4% (79.5fps)41.8% (47.8fps)TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280WRadeon RX 6700 XT$33964.3% (99.1fps)80.8% (158.1fps)50.3% (73.4fps)35.3% (40.4fps)Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230WGeForce RTX 3070$39964.1% (98.8fps)79.1% (154.8fps)53.2% (77.7fps)38.8% (44.4fps)GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220WGeForce RTX 2080 Ti 62.5% (96.3fps)77.2% (151.0fps)51.8% (75.6fps)38.0% (43.5fps)TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250WRadeon RX 7600 XT$31959.7% (91.9fps)77.3% (151.2fps)45.1% (65.9fps)32.4% (37.1fps)Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190WGeForce RTX 3060 Ti$44958.9% (90.7fps)75.0% (146.9fps)47.9% (70.0fps) GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200WRadeon RX 6700 10GB$26955.9% (86.1fps)74.4% (145.7fps)43.0% (62.8fps)28.7% (32.9fps)Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175WGeForce RTX 2080 Super 55.8% (86.0fps)72.2% (141.3fps)45.2% (66.1fps)32.1% (36.7fps)TU104, 3072 shaders, 1815MHz, 8GB [email protected], 496GB/s, 250WGeForce RTX 4060$29455.1% (84.9fps)72.7% (142.3fps)41.9% (61.2fps)27.8% (31.9fps)AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115WGeForce RTX 2080 53.5% (82.5fps)69.8% (136.7fps)43.2% (63.2fps) TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215WRadeon RX 7600$26953.2% (82.0fps)72.3% (141.4fps)39.2% (57.3fps)25.4% (29.1fps)Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165WRadeon RX 6650 XT$22950.4% (77.7fps)70.0% (137.1fps)37.3% (54.5fps) Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180WGeForce RTX 2070 Super 50.3% (77.4fps)66.2% (129.6fps)40.0% (58.4fps) TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215WIntel Arc A770 16GB$28949.9% (76.9fps)59.4% (116.4fps)41.0% (59.8fps)30.8% (35.3fps)ACM-G10, 4096 shaders, 2400MHz, 16GB [email protected], 560GB/s, 225WIntel Arc A770 8GB$34348.9% (75.3fps)59.0% (115.5fps)39.3% (57.5fps)29.0% (33.2fps)ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225WRadeon RX 6600 XT$39948.5% (74.7fps)68.2% (133.5fps)35.7% (52.2fps) Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160WRadeon RX 5700 XT 47.6% (73.3fps)63.8% (124.9fps)36.3% (53.1fps)25.6% (29.3fps)Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225WGeForce RTX 3060$29946.9% (72.3fps)61.8% (121.0fps)36.9% (54.0fps) GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170WIntel Arc A750$23945.9% (70.8fps)56.4% (110.4fps)36.7% (53.7fps)27.2% (31.1fps)ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225WGeForce RTX 2070 45.3% (69.8fps)60.8% (119.1fps)35.5% (51.8fps) TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175WRadeon VII 45.1% (69.5fps)58.2% (113.9fps)36.3% (53.0fps)27.5% (31.5fps)Vega 20, 3840 shaders, 1750MHz, 16GB [email protected], 1024GB/s, 300WGeForce GTX 1080 Ti 43.1% (66.4fps)56.3% (110.2fps)34.4% (50.2fps)25.8% (29.5fps)GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250WGeForce RTX 2060 Super 42.5% (65.5fps)57.2% (112.0fps)33.1% (48.3fps) TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175WRadeon RX 6600$19942.3% (65.2fps)59.3% (116.2fps)30.6% (44.8fps) Navi 23, 1792 
shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132WIntel Arc A580$16942.3% (65.1fps)51.6% (101.1fps)33.4% (48.8fps)24.4% (27.9fps)ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185WRadeon RX 5700 41.9% (64.5fps)56.6% (110.8fps)31.9% (46.7fps) Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180WRadeon RX 5600 XT 37.5% (57.8fps)51.1% (100.0fps)28.8% (42.0fps) Navi 10, 2304 shaders, 1750MHz, 8GB GDDR6@14Gbps, 336GB/s, 160WRadeon RX Vega 64 36.8% (56.7fps)48.2% (94.3fps)28.5% (41.6fps)20.5% (23.5fps)Vega 10, 4096 shaders, 1546MHz, 8GB [email protected], 484GB/s, 295WGeForce RTX 2060 36.0% (55.5fps)51.4% (100.5fps)27.5% (40.1fps) TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160WGeForce GTX 1080 34.4% (53.0fps)45.9% (89.9fps)27.0% (39.4fps) GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180WGeForce RTX 3050$22433.7% (51.9fps)45.4% (88.8fps)26.4% (38.5fps) GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130WGeForce GTX 1070 Ti 33.1% (51.1fps)43.8% (85.7fps)26.0% (37.9fps) GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180WRadeon RX Vega 56 32.8% (50.6fps)43.0% (84.2fps)25.3% (37.0fps) Vega 10, 3584 shaders, 1471MHz, 8GB [email protected], 410GB/s, 210WGeForce GTX 1660 Super 30.3% (46.8fps)43.7% (85.5fps)22.8% (33.3fps) TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125WGeForce GTX 1660 Ti 30.3% (46.6fps)43.3% (84.8fps)22.8% (33.3fps) TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120WGeForce GTX 1070 29.0% (44.7fps)38.3% (75.0fps)22.7% (33.1fps) GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150WGeForce GTX 1660 27.7% (42.6fps)39.7% (77.8fps)20.8% (30.3fps) TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120WRadeon RX 5500 XT 8GB 25.7% (39.7fps)36.8% (72.1fps)19.3% (28.2fps) Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130WRadeon RX 590 25.5% (39.3fps)35.0% (68.5fps)19.9% (29.0fps) Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225WGeForce GTX 980 Ti 23.3% (35.9fps)32.0% (62.6fps)18.2% (26.6fps) GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250WRadeon RX 580 8GB 22.9% (35.3fps)31.5% (61.7fps)17.8% (26.0fps) Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185WRadeon R9 Fury X 22.9% (35.2fps)32.6% (63.8fps)  Fiji, 4096 shaders, 1050MHz, 4GB HBM2@2Gbps, 512GB/s, 275WGeForce GTX 1650 Super 22.0% (33.9fps)34.6% (67.7fps)14.5% (21.2fps) TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100WRadeon RX 5500 XT 4GB 21.6% (33.3fps)34.1% (66.8fps)  Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130WGeForce GTX 1060 6GB 20.8% (32.1fps)29.5% (57.7fps)15.8% (23.0fps) GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120WRadeon RX 6500 XT$15919.9% (30.6fps)33.6% (65.8fps)12.3% (18.0fps) Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107WRadeon R9 390 19.3% (29.8fps)26.1% (51.1fps)  Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275WGeForce GTX 980 18.7% (28.9fps)27.4% (53.6fps)  GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165WGeForce GTX 1650 GDDR6 18.7% (28.8fps)28.9% (56.6fps)  TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75WIntel Arc A380$11918.4% (28.4fps)27.7% (54.3fps)13.3% (19.5fps) ACM-G11, 1024 shaders, 2450MHz, 6GB [email protected], 186GB/s, 75WRadeon RX 570 4GB 18.2% (28.1fps)27.4% (53.6fps)13.6% (19.9fps) Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150WGeForce GTX 1650 17.5% (27.0fps)26.2% (51.3fps)  TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75WGeForce 
GTX 970 17.2% (26.5fps)25.0% (49.0fps)  GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145WRadeon RX 6400$12415.7% (24.1fps)26.1% (51.1fps)  Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53WGeForce GTX 1050 Ti 12.9% (19.8fps)19.4% (38.0fps)  GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75WGeForce GTX 1060 3GB  26.8% (52.5fps)  GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120WGeForce GTX 1630 10.9% (16.9fps)17.3% (33.8fps)  TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75WRadeon RX 560 4GB 9.6% (14.7fps)16.2% (31.7fps)  Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80WGeForce GTX 1050  15.2% (29.7fps)  GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75WRadeon RX 550 4GB  10.0% (19.5fps)  Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50WGeForce GT 1030  7.5% (14.6fps)  GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W

*: GPU couldn’t run all tests, so the overall score is slightly skewed at 1080p ultra.

While the RTX 4090 does technically take first place at 1080p ultra, it’s the 1440p and especially 4K numbers that impress. It’s less than 2% faster than the RTX 4080 Super at 1080p ultra, but that increases to 9% at 1440p and then 25% at 4K. Also note that the fps numbers in our table incorporate both the average and minimum fps into a single score — with the average given more weight than the 1% low fps.
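The article doesn’t publish the exact blend, so purely as a loose illustration (the 0.75/0.25 weighting below is an assumption, not Tom’s Hardware’s actual formula):

```python
# Illustrative only: each game's number blends the average fps and the 1% low fps,
# with the average weighted more heavily. The exact weights aren't published here,
# so the 0.75/0.25 split is an assumption.
def blended_fps(avg_fps: float, low_1pct_fps: float, avg_weight: float = 0.75) -> float:
    return avg_weight * avg_fps + (1 - avg_weight) * low_1pct_fps

print(blended_fps(120.0, 85.0))   # 111.25 with the assumed weighting
```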

Again, keep in mind that we’re not including any ray tracing or DLSS results in the above table, as we use the same test suite with the same settings on all current and previous generation graphics cards. Since only RTX cards support DLSS (and RTX 40-series if you want DLSS 3), that would drastically limit which cards we could directly compare. You can see DLSS 2/3 and FSR 2 upscaling results in our RTX 4070 review if you want to check out how the various upscaling modes might help.

Of course, the RTX 4090 comes at a steep price, though it’s not that much worse than the previous generation RTX 3090. In fact, we’d say it’s a lot better in some respects: at launch, the 3090 offered only a minor performance improvement over the 3080, albeit with more than double the VRAM. Nvidia pulled out all the stops with the 4090, increasing the core counts, clock speeds, and power limits to push it beyond all contenders. There are two problems with the 4090, however: It’s not available at MSRP any longer, due to demand from the AI sector — it often costs $2,000 or more — and there are still concerns with pulling 450W of power over the 16-pin connector.

Stepping down from the RTX 4090, the RTX 4080 Super and RX 7900 XTX trade blows at higher resolutions, while CPU bottlenecks come into play at 1080p. We’ll be switching to an i9-13900K in the near future, and you can see those results in our latest graphics card reviews as well as in the charts at the bottom of the page.


Outside of the latest releases from AMD and Nvidia, the RX 6000- and RTX 30-series chips still perform reasonably well, and if you’re using such a card, there may not be any need to upgrade at present. Intel’s Arc GPUs also fall into this category and are something of a wild card.

We’ve been testing and retesting GPUs periodically, and the Arc chips running the latest drivers now complete all of our benchmarks without any major anomalies. (Minecraft was previously a problem, though Intel has finally sorted that out.) They’re not great on efficiency, but overall performance and pricing for the A750 is quite good.

Turning to the previous generation GPUs, the RTX 20-series and GTX 16-series chips end up scattered throughout the results, along with the RX 5000-series. The general rule of thumb is that you get one or two “model upgrades” with the newer architectures, so for example the RTX 2080 Super comes in just below the RTX 3060 Ti, while the RX 5700 XT basically matches the newer and less expensive RX 6600 XT.

Go back far enough and you can see how modern games at ultra settings severely punish cards that don’t have more than 4GB VRAM. We’ve been saying for a few years now that 4GB was just scraping by, and these days we’d avoid buying anything with less than 8GB of VRAM — 12GB or more is the minimum we’d want with a mainstream GPU, and 16GB or more for high-end and above. Old cards like the GTX 1060 3GB and GTX 1050 actually failed to run some of our tests, which skews their results a bit, even though they do better at 1080p medium.

Now let’s switch over to the ray tracing hierarchy.


Ray Tracing GPU Benchmarks Ranking 2024

Enabling ray tracing, particularly with demanding games like many of those we’re using in our DXR test suite, can cause framerates to drop off a cliff. We’re testing with “medium” and “ultra” ray tracing settings. Medium generally means using the medium graphics preset but turning on ray tracing effects (set to “medium” if that’s an option; otherwise, “on”), while ultra turns on all of the RT options at more or less maximum quality.

Because ray tracing is so much more demanding, we’re sorting these results by the 1080p medium scores. That’s also because the RX 6500 XT, RX 6400, and Arc A380 basically can’t handle ray tracing even at these settings, and testing at anything more than 1080p medium would be fruitless (though we’ve done 1080p ultra just for fun if you check the charts at the end).

The five ray tracing games we’re using are Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Metro Exodus Enhanced, and Minecraft — all of these use the DirectX 12 / DX12 Ultimate API. The fps score is the geometric mean (equal weighting) of the five games, and the percentage is scaled relative to the fastest GPU in the list, which again is the GeForce RTX 4090.

If you want to see what the future may hold with ray tracing, check out our Alan Wake 2 benchmarks where the full path tracing barely manages playable performance even with upscaling on non-Nvidia GPUs.

GPU Ray Tracing Hierarchy, Key Takeaways

- Nvidia absolutely dominates in ray tracing performance, with the RTX 4070 Super and above beating AMD’s best, the RX 7900 XTX, which sits at position 11. Intel’s Arc A770 lands at number 31.
- DLSS 2 upscaling in quality mode is supported in most ray tracing games and can boost performance an additional 30 to 50 percent (depending on the game, resolution, and settings used). FSR 2 and XeSS support can provide a similar uplift, but FSR 2 is only in about a third as many games right now, and XeSS support is even less common.
- You’ll need an RTX 4070 or RTX 3080 or faster GPU to handle 1080p with maxed-out settings at 60 fps or more, which also means Performance mode upscaling can make 4K viable.
- The RTX 4080 again ranks as the most efficient GPU, followed by the rest of the 40-series cards. Even the RTX 3060, 3060 Ti, and 3070 rank ahead of AMD’s best, which is the RX 7900 XT. Intel’s Arc GPUs are still pretty far down the efficiency list, though in DXR they’re often better than AMD’s RX 6000-series parts.
- The best overall ray tracing “value” in FPS per dollar (at 1080p ultra) goes to the Arc A580 again, followed by the RTX 4060, RTX 4070, Arc A750, and RTX 4060 Ti.

Graphics CardLowest Price1080p Medium1080p Ultra1440p Ultra4K UltraSpecifications (Links to Review)GeForce RTX 4090$1849100.0% (165.9fps)100.0% (136.3fps)100.0% (103.9fps)100.0% (55.9fps)AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450WGeForce RTX 4080 Super$99986.8% (144.0fps)85.3% (116.3fps)75.6% (78.6fps)70.5% (39.4fps)AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320WGeForce RTX 4080$118585.4% (141.6fps)83.4% (113.6fps)73.1% (76.0fps)67.7% (37.8fps)AD103, 9728 shaders, 2505MHz, 16GB [email protected], 717GB/s, 320WGeForce RTX 4070 Ti Super$79977.3% (128.2fps)73.5% (100.3fps)63.5% (66.0fps)58.4% (32.6fps)AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285WGeForce RTX 3090 Ti$173971.9% (119.3fps)68.4% (93.2fps)59.6% (62.0fps)56.9% (31.8fps)GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450WGeForce RTX 4070 Ti$69971.5% (118.6fps)67.1% (91.6fps)56.9% (59.1fps)52.3% (29.2fps)AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285WGeForce RTX 4070 Super$58968.1% (113.0fps)62.7% (85.6fps)52.4% (54.5fps)47.8% (26.7fps)AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220WGeForce RTX 3090$127967.7% (112.4fps)63.5% (86.6fps)55.1% (57.2fps)51.8% (28.9fps)GA102, 10496 shaders, 1695MHz, 24GB [email protected], 936GB/s, 350WGeForce RTX 3080 Ti$108966.5% (110.4fps)62.2% (84.8fps)53.2% (55.3fps)48.6% (27.1fps)GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350WRadeon RX 7900 XTX$90966.1% (109.6fps)61.7% (84.1fps)53.2% (55.3fps)48.6% (27.2fps)Navi 31, 6144 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355WGeForce RTX 3080 12GB$99964.9% (107.6fps)59.9% (81.7fps)50.8% (52.8fps)46.3% (25.8fps)GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400WGeForce RTX 4070$53961.2% (101.4fps)54.2% (73.9fps)45.1% (46.9fps)40.7% (22.7fps)AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200WRadeon RX 7900 XT$69960.4% (100.3fps)55.3% (75.3fps)46.7% (48.5fps)41.6% (23.3fps)Navi 31, 5376 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315WGeForce RTX 3080$87960.2% (99.8fps)54.5% (74.3fps)46.1% (47.9fps)41.8% (23.3fps)GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320WRadeon RX 7900 GRE$54952.9% (87.7fps)46.8% (63.7fps)39.6% (41.2fps)35.7% (19.9fps)Navi 31, 5120 shaders, 2245MHz, 16GB GDDR6@18Gbps, 576GB/s, 260WGeForce RTX 3070 Ti$59950.6% (84.0fps)43.0% (58.6fps)35.7% (37.1fps) GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290WRadeon RX 6950 XT$57948.3% (80.1fps)41.4% (56.4fps)34.3% (35.7fps)31.0% (17.3fps)Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335WGeForce RTX 3070$39947.2% (78.2fps)39.9% (54.4fps)32.8% (34.1fps) GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220WRadeon RX 7800 XT$50946.7% (77.5fps)41.9% (57.1fps)34.9% (36.3fps)31.0% (17.3fps)Navi 32, 3840 shaders, 2430MHz, 16GB [email protected], 624GB/s, 263WRadeon RX 6900 XT$77945.4% (75.4fps)38.3% (52.3fps)32.1% (33.3fps)28.8% (16.1fps)Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300WGeForce RTX 4060 Ti$37445.2% (75.1fps)38.7% (52.8fps)32.3% (33.5fps)24.8% (13.9fps)AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160WGeForce RTX 4060 Ti 16GB$42945.2% (75.0fps)38.8% (53.0fps)32.7% (34.0fps)29.5% (16.5fps)AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160WTitan RTX 44.8% (74.4fps)39.1% (53.3fps)33.7% (35.0fps)31.2% (17.4fps)TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280WGeForce RTX 2080 Ti 42.7% (70.9fps)37.2% (50.7fps)31.6% (32.9fps) TU102, 4352 shaders, 1545MHz, 11GB 
GDDR6@14Gbps, 616GB/s, 250WRadeon RX 6800 XT$45942.2% (70.0fps)35.6% (48.5fps)29.9% (31.1fps)26.8% (15.0fps)Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300WGeForce RTX 3060 Ti$44941.9% (69.5fps)35.0% (47.7fps)28.8% (30.0fps) GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200WRadeon RX 7700 XT$44941.3% (68.4fps)36.5% (49.7fps)30.6% (31.8fps)27.2% (15.2fps)Navi 32, 3456 shaders, 2544MHz, 12GB GDDR6@18Gbps, 432GB/s, 245WRadeon RX 6800$39936.3% (60.1fps)30.2% (41.2fps)25.4% (26.3fps) Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250WGeForce RTX 2080 Super 35.8% (59.4fps)30.8% (42.0fps)26.1% (27.1fps) TU104, 3072 shaders, 1815MHz, 8GB [email protected], 496GB/s, 250WGeForce RTX 4060$29435.4% (58.8fps)30.6% (41.7fps)24.9% (25.8fps) AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115WGeForce RTX 2080 34.4% (57.1fps)29.1% (39.7fps)24.6% (25.5fps) TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215WIntel Arc A770 8GB$34332.7% (54.2fps)28.4% (38.7fps)24.0% (24.9fps) ACM-G10, 4096 shaders, 2400MHz, 8GB GDDR6@16Gbps, 512GB/s, 225WIntel Arc A770 16GB$28932.6% (54.1fps)28.3% (38.6fps)25.3% (26.2fps) ACM-G10, 4096 shaders, 2400MHz, 16GB [email protected], 560GB/s, 225WGeForce RTX 3060$29931.7% (52.5fps)25.7% (35.1fps)21.1% (22.0fps) GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170WGeForce RTX 2070 Super 31.6% (52.4fps)26.8% (36.6fps)22.3% (23.1fps) TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215WIntel Arc A750$23930.7% (51.0fps)26.8% (36.6fps)22.6% (23.5fps) ACM-G10, 3584 shaders, 2350MHz, 8GB GDDR6@16Gbps, 512GB/s, 225WRadeon RX 6750 XT$35930.0% (49.8fps)25.3% (34.5fps)20.7% (21.5fps) Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250WRadeon RX 6700 XT$33928.1% (46.6fps)23.7% (32.3fps)19.1% (19.9fps) Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230WGeForce RTX 2070 27.9% (46.3fps)23.5% (32.1fps)19.7% (20.4fps) TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175WIntel Arc A580$16927.5% (45.6fps)24.0% (32.7fps)20.3% (21.1fps) ACM-G10, 3072 shaders, 2300MHz, 8GB GDDR6@16Gbps, 512GB/s, 185WGeForce RTX 2060 Super 26.8% (44.5fps)22.4% (30.5fps)18.5% (19.3fps) TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175WRadeon RX 7600 XT$31926.6% (44.2fps)22.6% (30.8fps)18.3% (19.0fps)16.0% (8.9fps)Navi 33, 2048 shaders, 2755MHz, 16GB GDDR6@18Gbps, 288GB/s, 190WRadeon RX 6700 10GB$26925.9% (42.9fps)21.4% (29.2fps)16.8% (17.5fps) Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175WGeForce RTX 2060 23.2% (38.4fps)18.6% (25.4fps)  TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160WRadeon RX 7600$26923.1% (38.3fps)18.9% (25.7fps)14.7% (15.2fps) Navi 33, 2048 shaders, 2655MHz, 8GB GDDR6@18Gbps, 288GB/s, 165WRadeon RX 6650 XT$22922.7% (37.6fps)18.8% (25.6fps)  Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180WGeForce RTX 3050$22422.3% (36.9fps)18.0% (24.6fps)  GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130WRadeon RX 6600 XT$39922.1% (36.7fps)18.2% (24.8fps)  Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160WRadeon RX 6600$19918.6% (30.8fps)15.2% (20.7fps)  Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132WIntel Arc A380$11911.0% (18.3fps)   ACM-G11, 1024 shaders, 2450MHz, 6GB [email protected], 186GB/s, 75WRadeon RX 6500 XT$1595.9% (9.9fps)   Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107WRadeon RX 6400$1245.0% (8.3fps)   Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W

If you felt the RTX 4090 performance was impressive at 4K in our standard test suite, just take a look at the results with ray tracing. Nvidia put even more ray tracing enhancements into the Ada Lovelace architecture, and those start to show up here. There are still further potential performance improvements for ray tracing with SER, OMM, and DMM — not to mention DLSS 3, though that ends up being a bit of a mixed bag, since the generated frames don’t include new user input and add latency.

If you want a real kick in the pants, we also ran many of the faster ray tracing GPUs through Cyberpunk 2077’s RT Overdrive mode, which implements full “path tracing” (full ray tracing, without any rasterization) — as well as Alan Wake 2, which also uses path tracing at higher settings. That provides a glimpse of how future games could behave, and why upscaling and AI techniques like frame generation are here to stay.

Even at 1080p medium, a relatively tame setting for DXR (DirectX Raytracing), the RTX 4090 roars past all contenders and leads the previous generation RTX 3090 Ti by 41%. At 1080p ultra, the lead grows to 53%, and it’s nearly 64% at 1440p. Nvidia made claims before the RTX 4090 launch that it was “2x to 4x faster than the RTX 3090 Ti” — factoring in DLSS 3’s Frame Generation technology — but even without DLSS 3, the 4090 is 72% faster than the 3090 Ti at 4K.

AMD continues to relegate DXR and ray tracing to secondary status, focusing more on improving rasterization performance — and on reducing manufacturing costs through the use of chiplets on the new RDNA 3 GPUs. As such, the ray tracing performance from AMD isn’t particularly impressive. The top RX 7900 XTX basically matches Nvidia’s previous generation RTX 3080 12GB, which puts it barely ahead of the RTX 4070 — and it doesn’t even manage that in every DXR game. There are some minor improvements to RT performance in RDNA 3, though: the 7800 XT, for example, ends up basically tied with the RX 6800 XT in rasterization performance but is 10% faster in DXR.

Intel’s Arc A7-series parts show a decent blend of performance in general, with the A750 coming in ahead of the RTX 3060 overall. With the latest drivers (and with vsync forced off in the options.txt file), Minecraft performance also looks much more in line with the other Arc DXR results.

Nvidia GeForce RTX 4090 Founders Edition

You can also see what DLSS Quality mode did for performance in DXR games on the RTX 4090 in our review, but the short summary is that it boosted performance by 78% at 4K ultra. DLSS 3 frame generation improved framerates another 30% to 100% in our preview testing, though we recommend exercising (extreme) caution when looking at FPS with the feature enabled. It can dramatically boost framerates in benchmarks, but when actually playing games it often doesn’t feel much faster than without the feature.

Overall, with DLSS 2, the 4090 in our ray tracing test suite is nearly four times as fast as AMD’s RX 7900 XTX. Ouch. AMD’s FSR 2 and FSR 3 can help as well, and AMD continues to work on increasing the rate of adoption, but it still trails DLSS both in the number of games supported and in the overall image quality. Only two of the games in our DXR suite have FSR2 support. By comparison, all of the DXR games we’re testing support DLSS2 — and one also supports DLSS3.

Without FSR2, AMD’s fastest GPUs can only clear 60 fps at 1080p ultra, while remaining decently playable at 1440p with 40–50 fps on average. But native 4K DXR remains out of reach for just about every GPU, with only the 3090 Ti and above breaking the 30 fps mark on the composite score — and a couple of games still come up short on the 3090 Ti.

AMD also has FSR 3 frame generation. Like DLSS3, it adds latency, and AMD requires the integration of Anti-Lag+ support in games that use FSR 3. But Anti-Lag+ only works with AMD GPUs, which means non-AMD cards will likely incur a larger latency penalty. We’ve tested it in Avatar: Frontiers of Pandora and found it worked pretty well, but that was not the case in Forspoken and Immortals of Aveum.

The midrange GPUs like the RTX 3070 and RX 6700 XT basically manage 1080p ultra and not much more, while the bottom tier of DXR-capable GPUs barely manage 1080p medium — and the RX 6500 XT can’t even do that, with single digit framerates in most of our test suite, and one game that wouldn’t even work at our chosen “medium” settings. (Control requires at least 6GB VRAM to let you enable ray tracing.)

Intel’s Arc A380 ends up just ahead of the RX 6500 XT in ray tracing performance, which is interesting considering it only has 8 RTUs going up against AMD’s 16 Ray Accelerators. Intel posted a deep dive into its ray tracing hardware, and Arc seems reasonably impressive, except for the fact that the number of RTUs severely limits performance. The top-end A770 still only has 32 RTUs, which proves sufficient for it to pull ahead (barely) of the RTX 3060 in DXR testing, but it can’t go much further than that. Arc A750 and above also ends up ahead of AMD’s RX 6750 XT in DXR performance, showing just how poor AMD’s RDNA 2 hardware is when it comes to ray tracing.

It’s also interesting to look at the generational performance of Nvidia’s RTX cards. The slowest 20-series GPU, the RTX 2060, still outperforms the newer RTX 3050 by a bit, but the fastest RTX 2080 Ti comes in a bit behind the RTX 3070. Where the 2080 Ti basically doubled the performance of the 2060, the 3090 delivers about triple the performance of the 3050.


Test System and How We Test for GPU Benchmarks

We’ve used two different PCs for our testing. The latest 2022–2024 configuration uses an Alder Lake CPU and platform, while our previous testbed uses Coffee Lake and Z390. Here are the details of the two PCs.

Tom’s Hardware 2022–2024 GPU Testbed

Intel Core i9-12900K
MSI Pro Z690-A WiFi DDR4
Corsair 2x16GB DDR4-3600 CL16
Crucial P5 Plus 2TB
Cooler Master MWE 1250 V2 Gold
Cooler Master PL360 Flux
Cooler Master HAF500
Windows 11 Pro 64-bit

Tom’s Hardware 2020–2021 GPU Testbed

Intel Core i9-9900K
Corsair H150i Pro RGB
MSI MEG Z390 Ace
Corsair 2x16GB DDR4-3200
XPG SX8200 Pro 2TB
Windows 10 Pro (21H1)

For each graphics card, we follow the same testing procedure. We run one pass of each benchmark to “warm up” the GPU after launching the game, then run at least two passes at each setting/resolution combination. If the two runs are basically identical (within 0.5% or less difference), we use the faster of the two runs. If there’s more than a small difference, we run the test at least twice more to determine what “normal” performance is supposed to be.

We also look at all the data and check for anomalies. For example, the RTX 3070 Ti, RTX 3070, and RTX 3060 Ti are all generally going to perform within a narrow range — the 3070 Ti is about 5% faster than the 3070, which is about 5% faster than the 3060 Ti. If we see games with clear outliers (i.e. performance is more than 10% higher for the cards just mentioned), we’ll go back and retest whatever cards are showing the anomaly and figure out what the “correct” result would be.
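Here’s a rough sketch of that run-selection logic; run_benchmark is a placeholder for launching a game at a given setting and returning its fps, and the median fallback at the end is our assumption about how a “normal” result would be picked:

```python
# Sketch of the run-selection logic described above: one warm-up pass, then at
# least two timed passes; if they agree within 0.5%, keep the faster one,
# otherwise collect more runs. `run_benchmark` is a placeholder callable.
def measure(run_benchmark, max_extra_runs: int = 2) -> float:
    run_benchmark()                      # warm-up pass, result discarded
    results = [run_benchmark(), run_benchmark()]
    if abs(results[0] - results[1]) / max(results) <= 0.005:
        return max(results)              # within 0.5%: report the faster pass
    for _ in range(max_extra_runs):      # otherwise retest to find "normal" performance
        results.append(run_benchmark())
    return sorted(results)[len(results) // 2]   # assumption: fall back to the median run
```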

Due to the length of time required for testing each GPU, updated drivers and game patches inevitably come out that can impact performance. We periodically retest a few sample cards to verify our results are still valid, and if not, we go through and retest the affected game(s) and GPU(s). We may also add games to our test suite over the coming year, if one comes out that is popular and conducive to testing — see our article on what makes a good game benchmark for our selection criteria.

GPU Benchmarks: Individual Game Charts

The above tables provide a summary of performance, but for those who want to see the individual game charts, for both the standard and ray tracing test suites, we've got those as well. We're only including more recent GPUs in these charts, as otherwise things get very messy. These charts also use our new test PC, so the results differ slightly from the tables above — the newest tests are more relevant, but they haven't been run on many of the older GPUs shown in the tables.

These charts are up to date as of August 13, 2024.

GPU Benchmarks — 1080p Medium

GPU Benchmarks — 1080p Ultra

GPU Benchmarks — 1440p Ultra

GPU Benchmarks — 4K Ultra

GPU Benchmarks — Power, Clocks, and Temperatures

Most of our discussion has focused on performance, but for those interested in power and other aspects of the GPUs, here are the appropriate charts.

If you’re looking for the legacy GPU hierarchy, head over to page two! We moved it to a separate page to help improve load times in our CMS as well as for the main website. And if you’re looking to comment on the GPU benchmarks hierarchy, head over to our forums and join the discussion!

Choosing a Graphics Card

Which graphics card do you need? To help you decide, we created this GPU benchmarks hierarchy, consisting of dozens of GPUs from the past four generations of hardware. Not surprisingly, the fastest cards come from the latest Nvidia Ada Lovelace and AMD RDNA 3 architectures. AMD's graphics cards perform well without ray tracing, but tend to fall behind once RT gets enabled — even more so if you enable DLSS, which you probably should, though FSR 2 is a reasonable alternative. Fortunately, GPU prices are finally hitting reasonable levels, making this a better time to upgrade.

Of course, it's not just about playing games. Many applications use the GPU for other work, and we cover professional GPU benchmarks in our full GPU reviews. But a good graphics card for gaming will typically do equally well in complex GPU computational workloads. Buy one of the top cards and you can run games at high resolutions and frame rates with the effects turned all the way up, and you'll be able to do content creation work as needed. Drop down to the middle and lower portions of the list and you'll need to start dialing down the settings to get acceptable performance in regular gameplay and GPU benchmarks.

If your main goal is gaming, you can’t forget about the CPU. Getting the best possible gaming GPU won’t help you much if your CPU is underpowered and/or out of date. So be sure to check out the Best CPUs for gaming page, as well as our CPU Benchmarks Hierarchy to make sure you have the right CPU for the level of gaming you’re looking to achieve.

]]>
https://happyspin.me/the-gpu-benchmarks-hierarchy-2024-all-recent-graphics-cards-ranked/feed/ 0
Shareshot is an iOS app that transforms how you share iPhone and iPad screenshots https://happyspin.me/shareshot-is-an-ios-app-that-transforms-how-you-share-iphone-and-ipad-screenshots/ https://happyspin.me/shareshot-is-an-ios-app-that-transforms-how-you-share-iphone-and-ipad-screenshots/#respond Wed, 14 Aug 2024 05:21:09 +0000 https://happyspin.me/?p=72362

When you take a screenshot on your iPhone or iPad, iOS provides some tools such as cropping and markup. However, there’s no way to add device frames to screenshots right from the system. But with the new Shareshot app, users can do just that in an extremely intuitive way.

Hands-on with the Shareshot app

Whether you’re a developer or someone who works with social media, you’ve probably had to put a device frame on an iPhone or iPad screenshot. You can do this manually using image editing software such as Photoshop, or you can use Federico Viticci’s popular Apple Frames shortcut – which is free and very useful.
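For the curious, the manual version of this job boils down to compositing the screenshot under a device-frame image that has a transparent screen cutout. Here's a minimal sketch using Python's Pillow library; the file names and the frame's screen rectangle are hypothetical and would differ for each device frame asset.

from PIL import Image

def frame_screenshot(shot_path, frame_path, screen_box, out_path):
    # Device frame image with a transparent screen area.
    frame = Image.open(frame_path).convert("RGBA")
    shot = Image.open(shot_path).convert("RGBA")
    left, top, right, bottom = screen_box
    # Fit the screenshot to the frame's screen cutout.
    shot = shot.resize((right - left, bottom - top))
    canvas = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    canvas.paste(shot, (left, top))
    # Lay the frame over the screenshot so the bezel covers its edges.
    canvas.alpha_composite(frame)
    canvas.save(out_path)

# Hypothetical usage:
# frame_screenshot("screenshot.png", "iphone_frame.png", (120, 260, 1050, 2280), "framed.png")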

With this in mind, the developers behind Shareshot decided to create a tool that makes this process much faster, easier and more fun.

One of the things I liked most about Shareshot is that it’s very intuitive. For example, the app shows you the option to import your most recent screenshot right after you open it. And if you take a new screenshot while the app is in the background, it asks you if you want to put it in a frame.

Editing a screenshot is super easy, and Shareshot gives you a lot of options. For example, you can export a framed screenshot with a transparent background, or place it on a vertical, landscape, or square canvas. There are many gradient backgrounds available, or you can choose your own image from the gallery.


You can also make the frame larger or smaller and even change the direction and intensity of the shadow. All this with just a few taps.

Once you’re done, Shareshot lets you save the image or share it with someone else. It’s definitely a nice tool to have.

There are some limitations, but…

Version 1.0 of the app still has some limitations. For instance, it can only frame one screenshot at a time. Also, there’s no way to frame screen recordings (at least not yet). Still, the app is fantastic and you should probably give it a try.

Shareshot is available to try for free on the App Store. It works with iPhone, iPad, and even Apple Vision Pro. A subscription of $1.99 per month or $14.99 per year is required to remove the free version watermark.


]]>
https://happyspin.me/shareshot-is-an-ios-app-that-transforms-how-you-share-iphone-and-ipad-screenshots/feed/ 0
Apple Intelligence is more of a concept than a reality https://happyspin.me/apple-intelligence-is-more-of-a-concept-than-a-reality/ https://happyspin.me/apple-intelligence-is-more-of-a-concept-than-a-reality/#respond Wed, 14 Aug 2024 05:18:39 +0000 https://happyspin.me/?p=72359

Over the last few years, everyone has been talking about generative AI. Not only that, everyone has been doing something with generative AI. Like a lot of other people, I started wondering when Apple would do something about it. At WWDC 2024 in June, the company announced Apple Intelligence. But so far, Apple Intelligence remains more of a concept than a reality. Here's why.

Apple Intelligence recap

By now, you're probably familiar with Apple Intelligence, Apple's suite of AI-based tools. It includes things like Writing Tools for proofreading and rewriting text; summaries of articles, emails, and notifications; Genmoji; and even an app for generating illustrations.

Apple has also promised a brand-new Siri powered by Apple Intelligence. This Siri is expected to be much smarter, more natural, faster, and more reliable. The company also showed how the new Siri will be able to integrate deeply with apps, so that users will be able to ask the assistant to do things like apply a filter to a specific photo.

When I saw all this during the keynote in June, I was really excited. After all, it was a bit disappointing to see all the competitors launching generative AI tools while Apple wasn’t showing much beyond promises. But then the WWDC keynote was over, and Apple Intelligence was still a concept.


A limited preview

The first beta versions of iOS 18 didn’t include any Apple Intelligence features, and the company was silent about when they would become available. A few weeks ago, the company confirmed that Apple Intelligence wouldn’t be ready in time for the September release of iOS 18.0. It has therefore introduced a new iOS 18.1 beta with Apple Intelligence. With asterisks.

Some limitations were already expected. Apple said in June that Apple Intelligence would only be available in US English first (which is a bummer, but we'll get to that later). But once the beta became available to developers, we realized that pretty much nothing announced in June is working in the current betas.

Right now, if you're running the iOS 18.1 beta with Apple Intelligence, all you can do is rewrite text and summarize articles or notifications. There are also some improvements to the Photos app, which now has more natural search (which is great), and Siri has a new animation as well as a better understanding of contextual requests.

But the big promises for Apple Intelligence remain promises. We still can’t try out things like Genmoji, Image Playgrounds, erasing objects from photos, the new Siri with generative AI, or ChatGPT integration. And all we know is that these features will arrive “over the course of the next year.”


Apple Intelligence will struggle to catch up with its competitors

Yes, we're talking about beta software. But the point is that Apple is lagging behind its competitors, and in this area it's far behind. What OpenAI does with ChatGPT is impressive (and Apple itself acknowledges this). Google has also announced some really cool new features for its Gemini AI assistant, and they're rolling out to users starting today.

These competitors are constantly introducing new features, while Apple seems to be struggling to introduce what the rest of the industry has already been doing for at least a year.

One thing that really struck me while watching the latest Samsung and Google events is how much they emphasized that their respective generative AI features already work in many countries and regions. Apple Intelligence, meanwhile, is expected to remain US-only until next year.

After watching Google’s latest AI announcements, it’s hard to believe Apple is anything other than 2-3 years behind in this area — at least. https://t.co/fYxdQQ7CgC

— Mark Gurman (@markgurman) August 13, 2024

While Apple Intelligence will be partially available to users by the end of this year, competitors will have even more features and support for even more languages by then. And that says a lot about Apple in recent years.

Some might argue that Apple has waited this long to do something better, or to do it right. While things like Apple's Private Cloud Compute are certainly impressive, the perception as an end user is that Apple Intelligence isn't doing anything that other AIs can't already do. I also don't feel that the tools we have right now are any better.

I really hope to see Apple Intelligence getting better and running in more languages as soon as possible. However, given how Apple has handled everything recently, the short-term future of Apple Intelligence doesn’t look too promising to me.


]]>
https://happyspin.me/apple-intelligence-is-more-of-a-concept-than-a-reality/feed/ 0