PCIe 4.0 is a confirmed no-go on X470 (although some hacks may exist, AMD won't provide the BIOS code). Unless you are upgrading an existing computer, I'd go with X570.
According to this page: https://developer.nvidia.com/opengl-driver
OpenGL drivers exist for GT730 (and GT1030) cards but not GT710 cards, so something like this should meet the minimum requirements: https://pcpartpicker.com/product/GFTrxr/zotac-video-card-zt7111320l
For a few bucks more this should be twice as powerful (no idea if you would ever see the difference in photoshop): https://pcpartpicker.com/product/s3YWGX/gigabyte-geforce-gt-1030-2gb-silent-low-profile-video-card-gv-n1030sl-2gl
The AMD side seems to be full of complaints about OpenGL drivers, so I might want to avoid that. I'd certainly examine these issues closer if you wanted something a bit beyond a GT1030 card (a range where AMD is clearly better in gaming, although I can't say anything about OpenGL performance).
Beyond this, the GTX 1650 is a brand-new card. In normal use it simply isn't competitive with the AMD cards, but it is certainly much more powerful than a GT 1030 (although less than 3 times as fast) at twice the price. It also has 4GB of RAM instead of 2GB, although how Photoshop could possibly run out of 2GB is beyond me.
IMPORTANT: There are two types of GT 1030 cards out there: ones with GDDR5 (you want this) and ones with DDR4 (which run at about half the speed of the GDDR5 cards). The price difference isn't all that significant, and sellers often try to hide the type of RAM. Look under "effective memory clock" for a good idea of which the card has: I noted ~6000MHz for a GDDR5 product and ~2000MHz for a DDR4 product. The linked card is GDDR5.
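To see why the RAM type matters so much, here's a rough bandwidth comparison. This is a sketch, not spec-sheet data: I'm assuming both GT 1030 variants use a 64-bit bus, with GDDR5 at ~6 Gb/s effective per pin (the ~6000MHz "effective clock") and DDR4 at ~2.1 Gb/s. Check the actual card's spec sheet before buying.

```python
# Peak memory bandwidth = bus width (bits) * effective rate (Gb/s) / 8.
# The /8 converts bits to bytes.
def bandwidth_gb_s(bus_width_bits: int, effective_rate_gbps: float) -> float:
    return bus_width_bits * effective_rate_gbps / 8

gddr5 = bandwidth_gb_s(64, 6.0)   # ~48 GB/s for the GDDR5 variant
ddr4  = bandwidth_gb_s(64, 2.1)   # ~16.8 GB/s for the DDR4 variant
```

By this naive math the DDR4 card has roughly a third of the bandwidth, which is why it benchmarks so much slower despite the identical GPU.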
POSTSCRIPT: If you want a video card more powerful than the ones listed (especially an AMD card) you will need to take your power supply into consideration. The linked cards are unlikely to require additional power (the 1030 apparently doesn't even require a direct +12VDC line from the power supply), but going beyond them probably will.
Mining is using the card just like any other use. The only issue is if it was overvolted and undercooled. Plenty of miners were more likely to undervolt GPUs in an effort to use less power for each hash (this shouldn't cause any damage). A bigger issue is how they were packaged: giving them room to cool down wasn't always a priority for miners. If it was just in a PC then I wouldn't worry about it.
To be honest, my biggest worry about a GPU used for mining would be an abused fan. But replacing a fan shouldn't be that big a deal.
I don't think anyone has ever put a through-hole resistor on anything that could be called a GPU (presumably post-GeForce 256 [or the original Radeon for the ATI/AMD line]). Sufficiently large SMT resistors include numbers on them (expect two or three significant digits plus a power-of-ten multiplier). http://www.resistorguide.com/resistor-smd-code/
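For the common 3- and 4-digit markings, the decode is mechanical. This is a sketch covering only those two schemes (real parts also use the EIA-96 letter codes, which this ignores):

```python
def decode_smd_resistor(code: str) -> float:
    """Decode a common 3- or 4-digit SMD resistor code into ohms.

    The last digit is a power-of-ten multiplier; the digits before
    it are the significant figures. An 'R' marks a decimal point
    for small values (e.g. '4R7' = 4.7 ohms).
    """
    if "R" in code:
        return float(code.replace("R", "."))
    significand = int(code[:-1])   # significant digits
    exponent = int(code[-1])       # power-of-ten multiplier
    return significand * 10 ** exponent

# '472' -> 47 * 10^2 = 4700 ohms (4.7k); '103' -> 10k ohms
```

So if the neighbors in that row read "470", the missing part is almost certainly a 47-ohm resistor too.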
Can you see broken solder joints on the pads? Plenty of PCBs simply don't include parts, typically for features not included (adding a resistor will only rarely unlock said feature) or for conservative engineering in case such parts are needed. If it shipped without said part, I'd assume it wasn't needed.
Is said resistor the only issue? Resistors are pretty straightforward devices and aren't likely to fail. If the thing was burnt to a crisp, I'd assume that some other device fried it and is waiting to fry the next component. I'd look for failed capacitors first (especially something that might be shorting to ground, and pulling through said resistor).
PS: No. Almost all circuit boards use plenty of different types of resistors. Note that if you have a long row of identical [probably terminating] resistors that all have the same value, I'd expect the missing one matches them.
WCCF Tech seems more an "educated guess" site. Some better questions:
Are they changing the chips at all? I've heard rumors of better memory availability, so that is a cheap possibility for nvidia to add to their lines. There's also the possibility of rejiggering the amount of cores enabled/disabled. If they want to respond to AMD, I'd expect the 166x line makes more sense.
A better question is if/when Nvidia makes the jump to 7nm. In 2016, Nvidia introduced the Pascal GP100 (as a datacenter/compute card). In 2017, Nvidia replaced it with Volta. While Turing has replaced Volta for pro rendering and AI work, Volta is still Nvidia's card for FP64 HPC work. I'd expect to see a board with significant FP64 made on 7nm before any consumer core (assuming the consumer core would include Nvidia's pet-project hardware raytracers), so who knows. It looks like they skipped TSMC's "7nm" process, but perhaps "7nm+" (with EUV) will be sufficiently cost effective for Nvidia to jump in.
There: that's a quick summation likely to be as accurate as anything you will see on WCCF Tech.
I suspect most people familiar enough with Ohm's law to do this can use a soldering iron. A better question is whether the GPU can handle the additional watts, and apparently AP is willing to make that bet. There are similar mods that really try to cook a Vega 56, but that isn't a >$1k board.
I really doubt that AMD cares about the 2950X, and any customers willing to buy it will happily buy a Zen2 threadripper (based on scavenged Rome chips). Of course, AMD could be sufficiently confident in their overpowering lineup that they don't need the halo of a new threadripper. They probably don't break even on that line.
I think we've seen just about everything we are going to see via clockspeeds. It is pretty clear AMD still has a 16-core part left on the table, but I really doubt they can increase clockspeed. Remember, Bulldozer hit maximum clockrates at 32nm and they fell from there. Intel has had lousy clockrates on 10nm and will try to make up for the lower clockrates with the higher IPC of Sunny Cove. Smaller processes no longer mean higher clockrates, and it takes a lot of work just to match previous clockrates.
I'm also curious just how many 9900KS Intel can sell. They sound carefully binned, and it is a little weird to hear them pulling something like this off coming on the tail of supply issues.
But yes, Intel can release carefully binned 14nm++++++ halo chips to compete with Ryzen, and next year they can release 10nm Sunny Cove (for laptops) that will probably beat any Ryzen laptop, while AMD can still easily release a 16-core Ryzen 3000.
It probably won't make too much of a difference for a gaming PC whether you buy before or after 7/7.
If you want 240Hz gaming or otherwise extreme CPU performance, you will probably want a 5GHz Intel part (an i5 K-series will probably be enough); otherwise you will be limited by your GPU and I'd grab a great deal on either a first- or second-generation Ryzen (Newegg has an outrageous deal on a 2nd-gen 8-core, Microcenter has dirt-cheap 1st-gens). If you want 12 cores or really expect to need 256-bit AVX then the new AMD chips are for you, but that is more a streaming thing than a gaming thing.
If you want 4k@60Hz or 1440p@120Hz, I wouldn't expect Navi to get there and you'll probably have to go with Nvidia, before or after 7/7. Basically, assume that Navi can't out-perform Vega (or not by much) and choose both a monitor and GPU that fit your needs.
There's little reason to keep selling the Vega 56 or 64 once Navi drops. They might want to keep selling the Radeon VII to semi-pro types, but there is no reason to drop its price to maintain competitiveness with Navi (folks who buy it won't buy it primarily to play games).
While GDDR6 is expensive, HBM2 is far worse and AMD wants to get away from the stuff for all but high-end use (Jensen Huang talked and talked about how they were moving to HBM for years, and then only uses it for HPC and other datacenter-based systems).
They do plan on selling the 570/580/590 (I'm less sure about 590) as the 650/660 or similar, but that is because those chips+RAM are really good at their price points (a reasonable size on a fairly cheap process, plus inexpensive RAM leads to a board you don't want to stop selling). You can't say that about Vega56/64.
I still expect a Zen 2 APU roughly next year. The first APUs were branded "2200" even though they used zen[no plus], and were roughly a year later than the Zen CPUs. No clue if they will replace the I/O chip or try to make an entire SOC out of half a chiplet. Having one CPU chiplet and one GPU chiplet makes too much sense, perhaps eventually they will make one.
True, but the notebook market might be almost as big (although you don't get the margins). Also I've heard they pushed a lot harder to get nearly 5GHz of clockspeed. A lot of servers might prefer to use even fewer watts, running along at ~3GHz, but both AMD and Intel are willing to make those tradeoffs for desktop (and the weird server) speed.
Zen is pretty good for laptops, but I don't think they beat Intel.
More likely AMD has thrown off the "underdog" label and now expects to price themselves much closer to Intel. They have the performance, people are learning that AMD chips have said performance, and now AMD wants (more of) the price tag that goes along with it.
Didn't they recently brag in an investor's meeting that their margins were increasing (presumably from after the post-cryptoboom crash)?
The low prices on state-of-the-art AMD chips are too good to last. But there are still some outrageous deals on first- and second-generation Zens to be had.
Multi-threaded Cinebench can be simply explained by going from 128-bit-wide AVX2 execution to 256-bit-wide AVX2. Oddly enough, this will presumably reduce measured IPC (as the same work takes half as many AVX2 instructions) but should speed things up considerably.
There should be plenty of work left to get much higher IPC out of the second thread (or more, per Zen 3 rumors). Getting more IPC out of existing single-thread performance is hard, and would pretty much imply Zen left quite a bit on the table (which doesn't appear to be true).
I'd expect that Threadripper will require scavenged Rome I/O chips (for 24-32 cores) and I'd be surprised if they would bother designing Rome in such a way that half of it could communicate with the X399. More likely they will cut whatever Rome front ends are used (scavenged/disabling good chips/whatever) to produce the new threadripper.
Intel is claiming an 18% IPC improvement on a chip that will be limited to 4.1GHz (10nm), giving it at best the performance of a ~4.8GHz chip (4.1GHz × 1.18). AMD's claims can be even harder to justify (they claimed an IPC boost from Zen to Zen+ that required including faster clocks to justify), but they may have the performance in most cases.
Expect the "Ryzen 3000 APU" to be a bigger part of AMD's strategy, if not part of the hypetrain. Breaking into the desktop market makes a lot of noise and shows up in places like this. Breaking into the laptop market shows up on both Main St. and Wall St. Rome seems to be doing the same thing as the zen1 Epyc, only better. If it goes mostly like promised, expect plenty of places that bought zen1 Epyc for testing to buy zen2 [Rome] Epyc for production servers.
In 2011 Intel released Sandy Bridge in 32nm, and clocked it to 3.6GHz (at an IDF a year earlier they showed off one overclocked to 4.9GHz on air cooling). This has been the heart of Intel microarchitectures ever since (with obvious changes in SSE/AVX portions and upcoming major changes with Sunny Cove). Moore's law no longer gives much in the way of clock increases, at least for CPUs (GPUs may get a little more, being limited by overall power instead of local power).
AMD may well have declared the desktop (but not all high-end desktops) AMD's personal property. Just remember that the real market is servers and laptops (that AMD is slowly barging into). We should hear more about the Rome Epyc server, that should be critical to AMD's finances. AMD has already made a push into notebooks with the (one generation behind) APU chips, I can't imagine that notebook manufacturers can ignore them much longer.
The question that remains on AMD desktop domination is whether they can really compete on single-thread loads. It looks like they are even closer, but certainly not at the level of the 9900KS. How much that "halo chip" can convince people that their ordinary Intel chip is better is an open question, but as the champion Intel often gets the benefit of the doubt in public opinion (there's an old saying that a boxing contender has to at least knock the champion down to get the win). Single-thread loads remain hugely important in real computing, so even that tiny edge might not mean a "knockdown".
I've never had an issue with 2d graphics and Linux. S3 cards, 3dFX, early Radeons, late Radeons, nvidia, AMD chipset "GPUs", you name it, they "just work". If you plan on using Wayland, you might need to take 3d a bit more seriously.
Getting 3d is another story. Most of the time I've seen 25-50% of the performance I'm used to on Windows, although I can only notice this in gaming (which I've learned to do on Windows anyway, possibly thanks to long-ago SSD partitioning decisions that leave me with little room for gaming on Linux).
My guess is "just about anything", and in the <$60 range I'd seriously look at used cards [note I can't really vouch for this; I've bought used $100-$200 cards, but those were "low-end gaming" cards].
The catch is there seems to be a minimum cost to build a graphics card, and it is terribly close to $50. So you get the minimum cost, plus about $10 worth of "GPUness" added to it. Thus the reason I suggested going a used route.
Last I heard, Nvidia has always maintained "higher performance" Linux drivers, but has always done so in a "GPL-iffy" way that involves Nvidia shipping binary driver blobs and the user welding them into a suitable kernel, resulting in a non-distributable Linux kernel.
AMD has "free software" drivers, but they don't quite get the framerates that you get in windows. YMMV.
PS: I'm running a somewhat obsolete Linux Mint Debian Edition with KDE and an Nvidia GTX 560 Ti (oooold, and probably not available [new] for $60*) and getting glxgears running at 60fps regardless of how much I increase the window size. I'm betting it is limited by V-sync and at least getting some acceleration from the GPU.
Note that if you have built-in video (Intel or AMD), I'd strongly recommend that. Intel seems to take Linux drivers pretty seriously (even if they can't do discrete GPUs). It should outperform a $60 card, or at least be serious competition.
I've assumed I'd partition my own SSD in a similar way (mostly because I mostly use Linux, but want to use StoreMI for Steam games: finally getting "all the Steam library" on "SSDish" storage).
Dad is another issue, and this looks like a good plan. I was even wondering if simply using a 512G drive for StoreMI would make sense (as it should fill the 256G more or less) and leave the rest fallow (for SSD performance). I think leaving the OS and minimal extras would leave enough fallow for SSD performance.
Does this have to be manually partitioned? I've been assuming that StoreMI would be ideal for my father (giving him ~3TB of space without worrying about where it is stored. This is a man who appears incapable of saving files anywhere but the desktop even though he has used computers since 1981).
Can you convince Windows that "\user\" is "D:\User"? I think that would direct nearly all his problematic saves to the StoreMI drive. Or maybe I'll just have to go with a big SSD; backups are equally impossible for him.
I think at least some Western Digital drives report the same transfer speed (from the platters) on both 5400rpm and 7200rpm models. Rotational latency, the real problem, will always be ~33% longer on the slower drive.
Audio, video, and similar media just mean they launch like they would off a hard drive (barely perceptible) and then show zero difference. I'd assume that with StoreMI your heavily used files will be coming off the SSD, while anything coming off the HDD will feel painful on first access (but after that initial hit will be back to SSD speeds).
And don't forget the 5400rpm drive should be quieter and cooler. Depending on the case, it may well be the better drive.
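For what it's worth, the rotational-latency math is simple: average latency is half a revolution, so it scales directly with spindle speed (assuming WD's slower consumer drives spin at 5400rpm):

```python
# Average rotational latency = time for half a revolution.
# One revolution takes 60,000 / rpm milliseconds.
def avg_rotational_latency_ms(rpm: int) -> float:
    return 60_000 / rpm / 2

slow = avg_rotational_latency_ms(5400)  # ~5.56 ms
fast = avg_rotational_latency_ms(7200)  # ~4.17 ms
# The 5400rpm drive waits ~33% longer per access, on average.
```

Sequential transfer can indeed match if the slower drive packs its data more densely, but that extra ~1.4ms per random access never goes away.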
Microsoft can be pretty picky about what goes on "C:" drive (I have a 64GB OS partition*, installing visual studio was both limiting and painful), so I can't see any good reason to install on more than one drive.
You could do the above with NVMe and SATA, but it really looks like once you pay the NVMe premium, you might as well buy the whole thing in NVMe.
About the only good reason to use more than one drive would be a "fast/slow" system, presumably meaning a SSD and a HDD. If you are going the Ryzen route, you might want to have a 256GB drive and/or partition for StoreMI to combine with the HDD, otherwise just have all SSD C: and all HDD D:. Note that this only makes sense for some really big HDDs as the cost of a 3TB HDD is about the same cost as going from a 500GB SSD to a 1TB SSD: unless you really want the extra space, don't bother.
Between the ancient DOS limits of filesystems and using Linux for years, I've gotten into the habit of partitioning all my hard drives. But I can't recommend it anymore, unless you want to use it for some sort of backup scheme (backup the image of C: and the files of D:?). Maybe for a Linux system where you might want to upgrade by wiping out "/" while saving "/home", but for windows I'd want everything on one big hard drive (possibly minus a 256GB partition for StoreMI, I like my storage and would like to accelerate my HDDs).
I checked my local Craigslist and saw the following:
$50 FX6300 Bulldozer + GTX 750 Ti, 16GB DDR3: same lousy CPU (but with a few more cores) plus a "real" GPU. At half the cost of a Windows license, it is hard to beat (doesn't mention an OS, might not have one; avoid it without Windows unless you are really into Linux).
$200 3.2GHz i5, RX480, 8GB DDR4, 240G SSD, 2TB HDD, win 10 pro. More or less exactly what a low-end gaming machine would be (before Ryzen), if you can afford it (and the 1080 monitor it will happily run) you will have all the 1080@60Hz gaming and desktop performance you could want. Very good deal, just that the "win 10 pro" designation makes me uneasy (only makes sense if it came with the PC. But it may well have been high enough end when new to warrant the slightly extra expense of "windows pro").
There are also some iffy sub $100 machines that come with monitors. I'd also recommend putting a price filter on it so you don't see people trying to flog their builds at cost or higher.
The huge advantage with a refurb is that Microsoft loves to sell Windows licenses tied to the motherboard, so getting a computer plus Windows is easy. At least I hope so: license transfers can be iffy, but I suspect that if you have the sticker and number MS will let you keep using it.
At least one semi-local college had a surplus store that had all the cheap monitors you could want (buy 3 19" 1280x1024 for $45), but that was a few years ago and a run in with the fire marshal may have closed them. I'd still look for "surplus" followed by "used computer stores" for the monitor if necessary.
Amazon.ca (or Amazon.com) often provides counterfeit items (such as fake EVOs) and Samsung is likely tired of it.
Google "Amazon counterfeit" for more than you ever wanted to know about the fiasco. I'll admit that in general I can't feel bothered by cries of "counterfeit goods" while living in a country that offshored nearly all manufacturing (thus often making the difference between "counterfeit" and "legitimate" absolutely impossible to determine without examining the manufacturing contracts), but with Amazon you can often get some badly disguised fakes.
With 4 (or 3) "big" drives I'd recommend RAID 5. With the pairs of drives listed you can still go with RAID 1. RAID 5 means "one drive's worth of capacity is reserved to correct errors"; RAID 1 means "half your drives are reserved to correct errors". With 2 drives, RAID 1 makes sense due to simplicity (with 4 drives RAID 1 really doesn't make sense unless you really want to stay with Windows, especially since you can do RAID 1 with your motherboard).
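The capacity difference is easy to work out. A quick sketch (identical drives assumed, and ignoring filesystem overhead):

```python
# Usable capacity for the two simple RAID levels discussed.
# RAID 5 reserves one drive's worth of space for parity;
# RAID 1 mirrors, so half the drives are redundancy.
def usable_tb(n_drives: int, size_tb: float, level: int) -> float:
    if level == 1:
        return size_tb * n_drives / 2
    if level == 5:
        return size_tb * (n_drives - 1)
    raise ValueError("only RAID 1 and RAID 5 sketched here")

usable_tb(4, 4.0, 5)  # 4x4TB in RAID 5 -> 12 TB usable
usable_tb(4, 4.0, 1)  # 4x4TB in RAID 1 -> 8 TB usable
usable_tb(2, 4.0, 1)  # 2x4TB in RAID 1 -> 4 TB usable
```

Which is why RAID 1 only really makes sense for the 2-drive case: at 4 drives you're giving up a whole extra drive of capacity compared to RAID 5 for the same single-failure tolerance.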
Note that with Win10 [pro?], it looks like you can't boot RAID, so you'll put your OS on your 1TB drive (I'll assume that it is already there). Possibly a better idea would be to use FreeNAS or XigmaNAS as your OS, thus giving you the option to use ZFS.
And of course this doesn't get rid of the need for backups. RAID just means higher uptime while your filesystem is working.
Note, I only glanced through the specs and didn't see any red flags (the 120Hz "fake refresh" implies a 60Hz real refresh but doesn't guarantee it). If you want a ton of desktop space and serious immersion, I'd do the rest of the research on something like this.
Of course if you think that 1080 on a 21" monitor is "obviously blocky" then this isn't for you (roughly the same pixel size).
I'd look into all the raspberry pi variants and clones before digging into raven ridge (even though that is hard to beat).
Power will be your biggest concern. Do you want to split power off the main power supply, or have your own power supply? And of course with power comes heat. Raven Ridge might draw little power compared to most PCs, but that heat has to come out of the case somehow, and expect the case to heat up until it does.
I don't think you want to add an extra monitor. Perhaps a monitor with multiple HDMI inputs that can switch between the two (or more), preferably a secondary monitor for the main computer. You might be stuck with either two keyboards or some sort of KVM switch (although I'd expect it not to handle the "M" portion).
I'm also wondering how you will deal with all the brackets and whatnot? Your own workbench? After hours at work? A local makerlab? Same goes for cables, but I don't consider them so specialized (probably because I'm a computer type and not mechanical).
Or bigger (a 40" TV will actually be cheaper). Expect to do a lot more research when choosing that option.
You might look into a TV. One catch is that they only go down to 40" or so (but are much cheaper at that size). Be very careful about the refresh rate (look for something with a "game mode" for consoles), as earlier TVs had 30Hz refresh over HDMI and were a problem (you need HDMI 2.0 or higher or you don't get the bandwidth for 4k@60Hz; also make sure it can handle 4:4:4 chroma at that resolution [which will auto-trigger a rant from me, but that's what your video card almost certainly spits out]).
Your partslist is private. Also can't check "euro" prices without a country (I think. Or I could just randomly choose between France, Belgium, Sweden and Italy and assume you could buy from any listed. I'm not EU so I'm not sure how it works).
I can't see a good reason for upgrading. If you've seen >>60Hz gaming and loved it, that would be a good reason to do it (I'm somewhat skeptical, and also think that even if the younger set can see an advantage in 144+Hz gaming, this revolution came far too late for my old eyes).
One thing you might want to do is buy a second (or third) monitor, probably also 22" LED 1080p 60Hz. While Eyefinity (or Nvidia's equivalent) might push your video card, it will also work well for 2d desktop use (I'd recommend a single additional monitor if desktop use is the primary purpose, but a third makes much more sense for gaming).
Of course, if you are limited to 24" (60cm?), you might have trouble fitting in multiple monitors...
RAID is dying along with rotating drives. You really can't RAID NVMe cards without ultra-expensive hardware, and you really have to care about uptime to want to RAID SATA SSDs.
RAID is still more popular with NAS systems, especially homebrew ones. These tend to use rotating drives for large storage outside your PC, and can thus get away with RAID. Another trick is that ZFS allows "scrubbing" RAID to fix errors as they occur. ZFS mostly runs on Linux & BSD boxes, and is often found in homebrew NAS systems (it also wants at least 8GB of RAM, so isn't found in even the fancy consumer NAS boxes).
The important thing is that RAID is all about uptime (well RAID1,5 & 6 are, RAID0 is all about speed and I'd expect it to slow down SSDs). If something really goes wrong with your system you want real backups (which may well be on a NAS system with RAID).
Note that all this "RAID is dying" is only at the consumer level. RAID is really great for a lot of the issues that the big boys face, and scales well with their level of hardware commitment. Just don't expect it to be something you care about unless you hang out with the uncool kids over on https://www.reddit.com/r/DataHoarder/
Are you planning on filling the drives up? Check the "performance when full", sometimes it can drastically change the performance of a drive.
I remember seeing at least one review that implied the ADATA XPG SX8200 [non-Pro] seemed to work better when full (which was probably a misprint: I think the difference is DRAM, and that obviously gives the advantage to the device with DRAM as opposed to one relying on pseudo-SLC tricks...)
Gold and Bronze are efficiency ratings. I'd want 80+ (just to know it has some efficiency), but even Bronze isn't really needed.
I'd recommend making sure your power supply is rated for twice your typical power draw (look up benchmarks that more or less approximate your system, then use the "loaded" power level for gaming and "idle" for surfing; then buy a power supply that supplies twice that). Also put the build into PCPartPicker to check the wattage. If you buy it mainly for surfing and then double the "idle" level, you might get a number less than the maximum power draw: if so, make sure the power supply can exceed whatever PCPartPicker says the draw is.
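The sizing rule above boils down to two numbers. A minimal sketch of the arithmetic (the wattage figures in the examples are made up, not from any particular build):

```python
# Rule of thumb from the post: double the typical draw for your main
# use case, but never go below the build's maximum estimated draw
# (e.g. the PCPartPicker wattage estimate).
def psu_watts(typical_draw_w: float, max_draw_w: float) -> float:
    return max(2 * typical_draw_w, max_draw_w)

psu_watts(250, 400)  # gaming build: 2*250 = 500W target
psu_watts(60, 300)   # surfing build: 2*60 = 120W < 300W max, so 300W
```

Running a PSU at roughly half load also tends to keep it near the peak of its efficiency curve, which is a nice side effect of the doubling rule.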
Also stick to good name brands. Only two companies make CPUs. Only two companies make GPUs. Only two companies make [spinning] hard drives (maybe 3, I can't keep track of who is really independent: you just don't have to worry about fly-by-night spinning hard drives). Power supplies, SSDs, and to some degree DRAM are wide open, and I'd recommend being cautious about brand names (granted, a brand name can still go wrong by building a bad video card around a well-known GPU, but power supplies are wide open). Corsair and Seasonic should last 4+ years (I'm sure others do too, but some otherwise good brands can be hit or miss).
Said to be from the AMD shareholder meeting. I can't imagine what the fourth product would be (Threadripper 2? Both Threadrippers were delayed at least a quarter. An APU? That will take another mask however they make it, and the Ryzen 3000 APU (using Zen+) has basically been launched. Maybe they are building a mobile modem now that Intel has bowed out of the scene :)).
If "Navi's" performance can justify the VII+ name, I can't imagine how you can call it just another "Vega refresh". Vega really can't supply that performance with any reasonable amount of bandwidth we can expect from VII+/Navi.
Somewhat cheaper, and plenty of people gripe at seagates:
But I'd really pay the extra $3 and get an extra TB:
Note that the Seagate mentioned (ST2000DM006) runs the same price as the 3TB Hitachi ($60). Backblaze seemed to do well with that model of Seagate (certainly better for any WD when they were still buying 2TB), but I've never heard of a bad run of Hitachi drives.
I seem to recall the Q6600 being a legendary overclocker. I'd also guess that you didn't push it or it wouldn't have survived this long. At the speeds it was capable of, it would probably run with the first gen R3 ryzens.
Not only that, but the 3TB (which happens to cost the same as a 2TB, at least right now on pcpartpicker...) HDD is ~$50, or roughly the cost to go from 500GB to 1TB SSD. I still favor HDDs, but that is enough to send many people into the "I'll just take a big NVMe Flash drive and forget there was ever magnetic media".
Of course, video can take a lot of room, so you can justify that 3TB HDD by staying with the 512GB (or smaller) drive.
One thing to note before buying a big SSD: check the "when full" benchmarks. If you are reducing your total amount of storage, I'd be even more careful about how the SSD acts when full. A bunch of "name brand" TLC/QLC NVMe drives work fine, until you actually use them for your storage and they slow waaay down.
PS: with your original strategy, the 240GB Crucial BX500
is the lowest I'd go for an SSD. ~$26 for 240GB is hard to beat, and this comes from Crucial, not some cheap reseller. Make sure you look at the 3TB hard drives as well: those are within a dollar of the same brand's 2TB models. $75 for a working strategy, and probably enough left over to make sure you can swing the 16GB and whatever pluses you need...
Good question what a "new architecture" means in GPU circles. Certainly going from VLIW to GCN was a clear architectural change, but beyond that it isn't always clear. I'm fairly sure that it isn't the "architecture" (mostly visible in block diagrams of fetch/decode/execute/etc in all their parallel glory) that is causing bandwidth limitations so much as the whole I/O interface and the compression algorithms they use for all that data.
Can't do anything about the cost of GDDR6. You need the bandwidth; it is either that or HBM. I have to wonder about the price of the cooling: it seems a minor issue, but Nvidia avoids it on the low end by having the 1650 draw almost no power, and the 2080 [Ti or not] costs so much that any cooling (and onboard power circuitry) cost is a drop in the bucket. AMD claims to be increasing power efficiency (and it appears to be true) but, much like the Red Queen, they have to run even faster just to keep up with Nvidia.
I strongly suspect that "doing certain PS4 games at 4k" is part of the "Navi spec" Sony has for AMD. They had better come up with a way to do that by 2020, but I doubt that many gamers will be happy with 4k resolution with 2019 Navi (I'm really looking at buying a ~40" 4k TV and using it as a monitor. Mostly for 2d/desktop work, but I like my gaming to not be scaled down. I doubt that Navi will get the job done.)
AMD's CPU division has a clear vision of the future. The GPU division? Might as well be stuck in Wonderland, but they are likely well supported by the console division. Perhaps they do have a clear vision in which APUs are their primary focus and professional cards the secondary one (consumer-grade cards appear an afterthought, but they are also an afterthought for Nvidia).
All of this points to a "mid-level" Navi being constrained to Vega 64-level performance (assuming 256-bit GDDR6). For an "enthusiast Navi", things get more difficult: you need either HBM2 or 512-bit-wide GDDR6, and a VII-level core. This isn't expected to get much past a VII.
Presumably whatever Sony has been paying AMD to develop the PS5 will pay off in terms of some bandwidth optimization: I can't see Sony paying for HBM2 for the PS5 (unless they have some sort of phone-style integration that drops the price of such things). Even then, I'd expect AMD to need both bandwidth efficiency and some sort of power/area efficiency to get a 2020 Navi die competitive with the RTX 2080 and above (and expect Nvidia to launch 7nm cards once TSMC has 7nm at reasonable cost, so any leapfrog of Turing should be short lived).
Basically here is the issue (~2018 financial revenue)
Until recently, AMD was obviously focusing as much of their engineering budget as they could on becoming competitive with Intel. They succeeded, but still have to keep running to maintain that. There isn't much left over to try to achieve competitiveness with Nvidia (even though Nvidia seems mostly focused on datacenters, leaving consumer graphics an afterthought). I still suspect that the PC GPU designers should make great strides helping themselves to circuits that Sony paid to have designed.
Assuming the Navi, Vega64, and RTX2060 all perform similarly (don't expect to be doing much raytracing with a 2060), I'd assume that the Navi will be significantly cheaper than the rest. Unfortunately it may take awhile to undercut the 2060, but I suspect it will have to fall to at least 1660Ti prices (while easily outperforming it) eventually.
Gamers will happily pay a significant tax to buy Nvidia over AMD, and I see nothing changing that fact.
The Vega64 (Vega10) didn't have all that FP64 support, but was otherwise similar with 4 more compute units (64 total). I still think AMD can get similar performance with only 32 compute units, with (hopefully) similar bandwidth (256 bits @ 14 Gb/s vs. 2048 bits @ 1.89 Gb/s). Exceeding Vega64 performance with 256-bit GDDR6 is almost impossible (for AMD; Nvidia doesn't seem to have such issues).
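For what it's worth, those two memory setups land in the same ballpark; a quick back-of-the-envelope check using the bus widths and per-pin rates quoted above:

```python
# Memory bandwidth: bus width (bits) x per-pin rate (Gb/s) / 8 bits-per-byte = GB/s
def bandwidth_gbs(bus_width_bits, rate_gbps):
    return bus_width_bits * rate_gbps / 8

gddr6 = bandwidth_gbs(256, 14)    # hypothetical 256-bit GDDR6 Navi @ 14 Gb/s
hbm2 = bandwidth_gbs(2048, 1.89)  # Vega64's 2048-bit HBM2 @ 1.89 Gb/s

print(f"256-bit GDDR6: {gddr6:.0f} GB/s")   # 448 GB/s
print(f"2048-bit HBM2: {hbm2:.1f} GB/s")    # 483.8 GB/s
```

So the GDDR6 configuration gives up less than 10% of Vega64's bandwidth, which is why it caps out around Vega64 performance.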
I could have sworn that AMD threw even more compute power into Radeon VII (other than the FP64, of course).
Are they still producing Radeon VII? I figured they'd just meet demand and quietly bow out. If they are still producing I'd start to think that they are at least breaking even.
Assuming naive scaling with the frequency jump from 8 Gb/s to 14 Gb/s, a GDDR6-powered RX580 board (assuming it scales with bandwidth) might reach Vega64 levels ("userbenchmark" shows about a ~70% speed advantage for the Vega over the 580). Don't expect miracles (anything Sony paid for is likely going into the PS5 first) that somehow make better use of bandwidth (like Nvidia does).
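The naive scaling checks out arithmetically; going from 8 Gb/s to 14 Gb/s on the same bus is a 75% bandwidth bump, right around the ~70% gap userbenchmark reports (figures as quoted above):

```python
# Naive bandwidth scaling: GDDR5 @ 8 Gb/s -> GDDR6 @ 14 Gb/s on the same 256-bit bus
uplift = 14 / 8 - 1               # fractional bandwidth increase
print(f"bandwidth uplift: {uplift:.0%}")  # 75%

vega_advantage = 0.70             # userbenchmark's rough Vega64-over-RX580 lead
print(uplift >= vega_advantage)   # True: bandwidth scaling alone could close the gap
```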
Expect this "new improved" bandwidth to require something like a Vega64 core to power it. That would be roughly half the cores of the VII (Vega20), though it should be less than half the area (leaving out fancy Vega20 things like FP64 and whatnot). That would allow it to be clocked at Vega/VII-like levels and still use nearly half the power (or, more likely, the RX580's power level: still over 200W).
Power consumption is whatever AMD feels they can get away with: their chips seem to scale with power consumed. It's pretty pointless to speculate on what it will need; they might want to match the somewhat-more-than-200W of the RX580, or they might feel the need (performance-wise) to go to the nearly 300W of Vega.
I was hoping for more, but expect the card to be strongly limited by what the GDDR6 interface can provide (certainly less than the HBM2 of Vega and VII). Until AMD can do something about that, nvidia can do things like waste 20% of the chip on pet projects and still own the market.
Without the extra power lines it won't clock up at all (and would simply be slightly more powerful than an RX590). Not to mention that unless they have enough chips that can't even be used as Vega56s, it costs as much to make as a Vega56 or (if yields are high enough) a Vega64.
They need something that looks more like a Polaris (possibly with either Vega cores or whatever they are calling "Navi" now), only with GDDR6 (and enough cores to justify the increased bandwidth). That's almost certainly what "Navi" is (although how much bandwidth next year's "performance Navi" will need, and how they will provide it, is an open question).
It might drop a bit, but I'd guess that production stopped long enough ago not to bother. I'd also assume that all 1000 series chips will be sold out (you can't beat their prices now).
I was pondering how much it would cost to build a PC for my dad, and came up with this.
WARNING: requires access to a Microcenter, and willingness to find/buy an open-box motherboard.
PCPartPicker Part List
As you might expect, replace the CPU price with ~$80 from microcenter...
The motherboard (open box) has a similar price from them
Inland Pro should likely be Inland Premium, but I haven't done the full research. Note that this is a "for dad" build: he can't be bothered to move data around (and values all data equally, even the spam), so the 256GB NVMe SSD is basically a cache for the 3TB rotating-rust drive, not a manually chosen "OS/important apps" drive (which would almost certainly be 512GB or more)... the free tier of StoreMI only works with SSDs up to 256GB.
Optical? I had a hard time prying zipdisks away from dear old dad. No reason to try with optical (no click of death to worry about).
Monitor? As you might guess, dad is old. He didn't have any trouble with a blurry CRT monitor; there's no way he can resolve better-than-HD on a 32" screen (in fact I'll probably just get an HD TV).
Still, that's pretty high. Not too sure what to change.
No, really. It makes more sense to judge chips individually than to say you prefer AMD or Intel, and limit yourself to those chips.
Below 4 cores, Intel tends to be far better, if only because AMD doesn't make chips with fewer than 4 cores (they disable cores for the Athlon 200GE). Just remember that if you live near a Microcenter, that $80 R6 (1800?) will cream any Pentium or Celeron (of course you'll have to supply a GPU, regardless of whether the motherboard insists graphics are present).
Once you get into the mid-range, you have to balance Intel's higher cost and lower maximum throughput against its higher maximum clock rate. I wouldn't consider an Intel unless I planned to overclock it (thus forcing me to use a "K" chip).
At the high end, things get really different, with Intel having a large number of highly clocked cores in the i9 and AMD having an extreme number of reasonably high-clocked cores in the Threadripper. While you should always buy the hardware for the software you wish to run, at this level it becomes a lot more obvious which one works better with each program (or not: it takes a lot of parallelism to justify an i9, and I'd expect any reason to use an i9 over an i7 to imply that a Threadripper will do a better job than the i9. I suspect that most of the time people would rather throw money at an easy solution than figure out which they need).
I may like AMD as a company, but I really doubt I will ever need much more than 8 cores (and 8 threads), and clockspeed matters more to me than the additional cores and threads. Still, AMD's lower price is likely to keep me buying AMD when the Ryzen 3000 comes out (although I can't imagine Navi will have a place in such a computer, this year anyway; Nvidia simply owns everything above 1440p).
Another option would be using an Optane NVMe drive for that scratch space. Optane looks to run about 1/3 the price of standard memory (registered DDR4 sounds scary) while working well enough in most "huge" loads. Considering the cost of the Threadripper, I'd assume you would max out the memory and then want some seriously fast swap space (the new DDR4-slot Optane is Xeon-only and requires custom re-coding).
Depends what you are talking about. Nearly all the talk about Sunny/Snowy Cove (the first significantly different architecture since Sandy Bridge) seems to be based on 10nm.
10nm is something Intel has been saying is coming "real soon now" for far too long. I'm fairly sure they will get something out next year and call it "10nm" (they technically did it this year, but not in any significant shipments), but whether Sunny Cove will look like a significant change on "10nm" is another story.
It is even possible that they will have to skip 10nm, leaving an open question as to whether we will see the Sunny Cove architecture on 14nm or 7nm first. Intel having this much trouble on a node is something pretty new (although 14nm was delayed as well).
We'll almost certainly see yet another lake this year, but don't count on it being significantly different from any other lake. They are already shipping "lakes with more cores," so it isn't as if they have many options unless they have been backporting Sunny Cove for several years (reportedly a team may have started that with Jim Keller's hiring).
New AMD chips are almost certainly coming soon (AMD is tipping their hand with a firesale of both CPUs and GPUs). No clue about Intel (who are unlikely to hold such a sale even when they are good and ready to ship the next chip).