IPC gains would have to come from memory (or cache prefetching) improvements, since the cores are supposed to be the same. As far as I can tell, Zen's IPC already matches Intel's, with the obvious exception of Intel's wider AVX instructions (assuming you just measure performance instead of directly counting instructions, because wider "media" instructions get the job done with fewer instructions). AMD generally makes up the difference (and then some) with more cores and threading enabled (typically anything that uses AVX instructions is thread friendly).
I'm expecting the clockspeed to make up half the difference between an overclocked Zen and an overclocked Coffee Lake. The only issue left is whether or not more cores or enabled threading (i5 K vs. Ryzen 5) can make up for the difference in top clockspeed. I suspect 6 cores is "enough", and AMD may have to drop the Ryzen 7 (or not; don't ask me to predict people's choices).
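The cores-vs-clockspeed question can be framed as back-of-envelope arithmetic: for embarrassingly parallel work, throughput is roughly cores × clock × IPC. A minimal sketch, with all the numbers (5.0GHz overclock, 4.2GHz Ryzen, ~25% SMT uplift) being illustrative placeholders rather than measured values:

```python
# Back-of-envelope throughput model: cores * clock (GHz) * relative IPC.
# All numbers are illustrative placeholders, not benchmarks.
def throughput(cores: int, clock_ghz: float, ipc: float = 1.0) -> float:
    """Idealized multi-threaded throughput (ignores memory and scaling losses)."""
    return cores * clock_ghz * ipc

# Hypothetical overclocked 6-core Coffee Lake (no SMT on the i5)...
intel = throughput(cores=6, clock_ghz=5.0)
# ...vs a 6-core/12-thread Ryzen 5: credit SMT with a modest ~25% uplift
# rather than counting 12 full threads.
ryzen = throughput(cores=6, clock_ghz=4.2) * 1.25

print(f"Intel: {intel:.1f}, Ryzen: {ryzen:.1f}")
```

Under these made-up assumptions the two land within a few percent of each other, which is why the remaining question is really about how far each one clocks.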
Generally speaking, CPUs should be pretty "future proof" these days:
now vs. 5 years ago: https://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_vs_sandy_bridge_2600k_ipc_review
now vs. 10 years ago: https://www.anandtech.com/show/10525/ten-year-anniversary-of-core-2-duo-and-conroe-moores-law-is-dead-long-live-moores-law/
The main per-core improvement to Intel chips in the last 5 years may have been slightly wider AVX units, but this isn't going much further (Xeon chips have even wider ones that immediately cut the clock rate nearly in half; you have to be very careful which size your compiler will use), so "single threaded performance" isn't going anywhere.
The obvious improvement is more cores. Coffee Lake has 6 cores a pop (and Ryzen can have more), so this is fine (and probably not going anywhere soon).
So the only thing you have to worry about is not being able to supply enough threads in the future to deal with the rare program that wants more than 6 threads. In this case I'd strongly recommend considering the Ryzen 5 (presumably after reviews on Thursday): for the cost of the heatsink (which the 8400 will need and the Ryzen 5 shouldn't) you get 6 more threads (which might net you 40% more performance going flat out, no more). I'd definitely go with the Ryzen if you are overclocking (they are unlocked), and they should hit the turbo speed of the 8400 (hopefully confirmed on Thursday).
But you really can't go wrong future-proofing with either a Coffee Lake CPU or a Ryzen+ CPU. Just make sure your motherboard has at least 4 DRAM slots (two pairs) so you can add more RAM after DDR4 prices finally come down to Earth. Also make sure you have NVMe slots and don't fill them with a SATA drive (lots of M.2 drives use the SATA interface and eat a SATA port: just buy a 2.5" instead). You may be thankful later.
GPUs are another story. So far they have obeyed Dennard scaling (often mistaken for Moore's law), and now even Moore's law appears dead (transistors are only getting cheaper by going 3D in flash memories). So don't expect nVidia to sell you more transistors for the same money (at least at pre-mining-bubble prices).
GPU buying isn't about "future proof". It is about dealing with the mining bubble. I'd also like to imagine that future GPUs (and future VR headsets) will at least attempt to work together, but so far no such "VR-synch" has been promoted by either company (they have a few VR-tweaks, but nothing that requires headset and GPU manufacturers to work together).
Track down reviews of Ryzens overclocked to the new speeds and expect identical performance (not many hit 4.3GHz).
Newegg is taking pre-orders as of now (4/13, before 12:00pm EDT). Data copied from Ars Technica, but appears to match Newegg's prices.
Model           Cores/Threads   Base/Boost (GHz)   TDP (W)   Cooler               Price
Ryzen 7 2700X   8/16            3.7/4.3            105       Wraith Prism (LED)   $329
Ryzen 7 2700    8/16            3.2/4.1            65        Wraith Spire (LED)   $299
Ryzen 5 2600X   6/12            3.6/4.2            95        Wraith Spire         $229
Ryzen 5 2600    6/12            3.4/3.9            65        Wraith Stealth       $199
So since last year AMD has made up quite a bit of its missing clockspeed and single-thread performance, while Intel has increased its core count by 50%. I suspect at these prices the i5-8600K (once overclocked) is going to be hard to beat, and prices will have to adjust. Newegg doesn't seem to sell them without heatsinks; I wonder if that will ever be an option (or maybe AMD got that good a deal on them).
But you are unlikely to go wrong with the AMD choice. AMD hasn't been this strong since Athlon64.
I don't think you can add that type of thing to a console. It would be a nice feature, but I think Sony would have to include it in stock.
If you have a pair of headphones with a known shape, you could at least play around with equalizer software to find whatever shape you liked/wanted. Then look for that shape for both your PC and Console.
One of the issues is that it isn't clear how "remastering" would actually improve a game.
Star Raiders [for the Atari 400/800, 1979] was essentially that platform's "killer app" (a term that wouldn't come into existence for a few more years, when VisiCalc was written). I saw that a remastered version (by Infogrames) was on sale, but no. Just no.
M.U.L.E needs a comeback.
Decathlon (Microsoft's one classic addition to the gaming pantheon before they bought Flight Simulator from SubLogic) has been reborn several times (including the arcade game "Track and Field"). Each time it breaks hardware.
Robot War has been remastered (and the ISA updated) and is probably at maximum popularity (it is a weird game). Zachtronics could presumably do the best with the game (see TIS-100).
I wonder if you could even do a "remastered" dragon's lair that uses screen captures to texture a 3d-world. You'd have to closely match the inputs, and there really isn't much to the gameplay.
Atari Miniature Golf  would probably port wonderfully to tablets/phones. Perhaps I should get on to it.
Wing Commander : oops.
Civilization is hit or miss on the remakes. I'm sticking with 4, although it is great that Sid is still at it. Maybe another Pirates!
I liked Test Drive; I think it got run over by Need for Speed (I downloaded the Test Drive 5 demo and laughed. I don't think there was a Test Drive 6). NFS seems to have run off the road since. An NFS5: Porsche Unleashed-style NFS would be wonderful.
No One Lives Forever (1&2) were great. I think they threw away the series with the third installment (not even sure it was released). I think the IP still exists, and since the whole idea was to "kinda/sorta copy the 007 IP", actual licensing may not be the best way to revive it.
I think there have been Serious Sam remasterings. Considering how much I played that game when it was new, versus how little I know about the copies in my Steam library (it has SS1-3), that shows how well "remastering old games" really works.
The destruction of Baldur's Gate source files seems a crime. That said, there was a recent sale on Android (which seems a good home for it now).
A lot of this is thanks to rose-colored glasses. I think the Serious Sam approach (make a whole new game capturing the old gameplay) worked better, and extending Serious Sam even further hasn't worked.
I'm missing the problem: can't you just download equalizer software and adjust the shape to your taste? It certainly sounds easier than matching headphones/speakers to unknown tastes.
No idea. I just googled the software and looked at the site. Looks fine to me (and if the RAM is in 4 separate sticks, you can pull 2 out to test how a Ryzen would react to 8GB). I'd ask an actual user before buying the software (and also tell them what resolution of video you are planning to edit).
The motherboard can take 6 drives (including SSD and optical). It has two "SATA Express" ports that should presumably be used for SSDs (although I wouldn't worry if your other [rotating] HDD is plugged into one), so this shouldn't be a problem.
The case has room for 3 3.5" (spinning) and 2 2.5" (SSD) drives, also not a problem.
Do you have one of the drives installed? If so you should already understand the issues: the SSD is fast, the HDD is slow. Dumping all your bulk media and rarely used files should help, but don't expect it to work like your SSD.
I'm less impressed with the idea of a 7200 RPM drive, mostly because they are louder and more power hungry (although you won't hear it over your heatsink fan, especially if you overclock). Take it if it isn't more expensive than a 5400 RPM drive, but don't expect to notice the difference (the goal is to never have performance-critical files on the HDD).
I'd recommend just heading over to the storage section and either sorting by "cost per GB", or sliding the minimum size over to a TB or two, sorting by price, and then ignoring any brand you are prejudiced against (I suspect Seagates are most likely to fail). The following popped up and is a great deal if it is still at $68 (especially if you have any use for the other 2TB; if not, maybe not):
As far as adding it, the only precaution is to turn the machine off first. You might need to tell the BIOS (or whatever the setup screen is called now) after you do it. Hopefully your case will have a spare one of those twisty dohickeys (no idea what they are called; I'm not picky about my cases and my latest case has them, but not for all bays), otherwise you will need 4 screws to mount the thing (same size as the rest of the "computer screws"). Just be patient and don't force anything inside the computer (with 10 SATA power connectors, I'm pretty sure one will reach). Once the BIOS and the OS agree the drive is there, format it and start transferring large and rarely used files to it.
Second question: you can overclock the CPU (it has the all-important "K" suffix). "Safely" is unbelievably hard to quantify, but the best idea is to set hard limits on input voltage and temperature (ideally you should stay within Intel's published limits, but that would likely be similar to just using turbo mode). That said, the 6600K is said to hit 4.5GHz pretty often, although expect ~77C temperatures on your heatsink (I'd suggest dialing it back a bit; 4.2GHz might be a better goal). Look up a good guide for this type of thing, install a temperature monitor on your desktop, and gradually ramp the clocks up to a rate you are comfortable with.
The website suggests 2GHz, and recommends multicore for HD and 8 cores for 4K. I suspect it will be fine for HD but not 4K.
It also claims a minimum of 4GB RAM, with 8GB recommended and 16GB for 4K. This is likely more important (although exceeding this is unlikely to help).
It isn't what you would want to buy new, but at video editing it should be almost as good as an i5. A Ryzen 5 (with enough RAM) shouldn't have any limitations. But be careful of RAM prices before committing to a DDR4 machine (if the CPU is enough, it might make more sense to buy DDR3 and sit tight while DDR4 drops and Ryzen prices adjust to the Intel competition).
Except that a "Frontier Edition" is simply an overclocked Vega 64, which had an MSRP of roughly half that price. Wake me up when they can get a Vega 64, or better yet a Vega 56, down to MSRP prices (I'm not counting on it thanks to HBM2 issues, but maybe they will solve those eventually).
Ye gads. At least learn the basics of macroeconomics.
Supply: GPUs are basically limited by the supply of GDDR5 (or HBM2 in the case of Vega and TitanV).
Demand: The miners and gamers. Each have very different demand curves (see below).
The traditional explanation of "supply and demand" involves elastic demand, where buyers either buy or don't buy depending on how much they want an item (gamers match this). Miners are a completely different type of demand: they only want the product for the cryptocoins it produces.
A miner will pay any price for a GPU that will bring "sufficient" profit (where "sufficient" depends on the miner's time, the price of electricity, how close they are to maxing their lines of credit, wife acceptance factor, etc.). A miner isn't going to pay a price that won't pull a profit (of course, there will always be greedy fools who snatch up cards as prices drop, assuming the good days "will come back").
The moment all GPU-friendly cryptocoins cease to be profitable to mine, a two-stage response happens. First the miners (well, at least the rational ones) stop buying GPUs. Eventually, they start selling GPUs as well. How this plays out has a lot to do with how quickly the cryptocoin markets (all of them, or at least the ones less suitable to ASICs) crash.
Gamers, on the other hand, have traditional elastic demand. Some are willing to pay as much as miners, and those are the ones who have been buying cards for the past year+. Some will be willing to buy the former mining cards; others will pass and buy new. But what we will see is the whole market thrown into chaos, with the inelastic mining demand replaced by the even older (pre-cryptocoin) elastic market. Expect supply to greatly exceed demand, and some real deals should be available (although almost entirely in former mining cards).
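The two demand types above can be sketched numerically: gamers as a downward-sloping (elastic) curve, miners as an all-or-nothing threshold set by expected mining revenue per card. All curves and numbers here are invented purely for illustration:

```python
# Toy model of GPU demand: elastic gamers plus threshold-style miners.
# Every number below is made up for illustration, not market data.

def gamer_demand(price: float) -> float:
    """Elastic demand: quantity bought falls linearly as price rises."""
    return max(0.0, 1000.0 - 1.5 * price)

def miner_demand(price: float, revenue_per_card: float) -> float:
    """Threshold demand: miners buy heavily at any price below expected
    mining revenue, and not at all above it."""
    return 800.0 if price < revenue_per_card else 0.0

def total_demand(price: float, revenue_per_card: float) -> float:
    return gamer_demand(price) + miner_demand(price, revenue_per_card)

# While mining pays (say $700/card), total demand stays huge even at
# prices that push most gamers out of the market.
print(total_demand(600, revenue_per_card=700))
# After the coin crashes (revenue ~0), only the elastic gamer curve is left.
print(total_demand(600, revenue_per_card=0))
```

The point of the sketch is the discontinuity: miner demand doesn't taper as prices rise, it vanishes all at once when the coin stops paying, which is exactly the two-stage crash described above.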
Note that the supply of new cards isn't changing, or nVidia and AMD would have made bank while producing these cards. A wafer of GPU chips takes months to build, and the lead times are long. Unless nVidia and AMD are willing to eat the cost of a GPU chip (and they typically are: the business model typically involves crippling much of the output to create artificial scarcity and product segmentation) to avoid buying GDDR5 or HBM2, those boards are being made (and there are only so many different boards you can populate with each GPU type).
You'll note that the price of RAM didn't show up in the entire discussion. The only time it comes up is in what mix of GPUs nVidia and AMD want to produce 6 months (or whatever the lead time is) from now (and how few GPUs they are willing to produce in such conditions). You might see a different mix of 3GB/6GB 1060s or something, but that's about it. I wouldn't be that surprised if nVidia was sitting on a pile of GP104s (the chip needed for 1080s) because there isn't enough GDDR5, but it really doesn't change the final price (AMD can simply ask GloFo to produce Zeppelins instead and make a bunch of Ryzen/Epyc chips. Wonder of wonders, AMD is back selling CPUs).
Ancient games and a 560Ti.
Kerbal Space Program is a Unity game and mostly CPU limited (physics is a big part of a game based on rocket science).
Lord of the Rings Online is from at least 10 years back and a 560ti can play (Gondor and later give some issues) at 1920x1080.
And more on the tablet:
Spaceward Ho! is a 1990 Mac game ported to Android.
Icewind Dale/Baldur's Gate are from 2000ish and ported to Android (and were on sale recently).
Even if it was the problem, I wouldn't expect it to damage the PSU (it didn't hurt the old one), and there is almost no way it could have hurt the motherboard. I have had a drive that prevented the motherboard from booting; eventually I discovered that I could boot, then plug in the hard drive (extremely not recommended), and pull the data off the drive. I'm reasonably sure nothing damaged the motherboard.
The biggest issue would be the life expectancy of the hard drive. I'd be even more concerned with backups than before.
I haven't seen one of those in a long, long time. And I'd recommend checking for a router first (most broadband carriers provide a router during installation nowadays), which means crossover cables will be that much harder to find (unless you have a crimper or know somebody who does; then it's trivial to whip one up). Don't forget a crossover cable is limited to 10/100 ethernet, so it is going to be slow.
"take out the one he currently uses" would mean that he would be using the OS from the old drive and windows would throw a licensing fit.
Assuming your computer is a desktop and can accept a second drive, you will need both drives installed in the computer in order to boot (without windows complaints). If your computer is a laptop, all-in-one, or otherwise can't take a second drive you will need some other method of adding a hard drive (USB 2.5" hard drive holders are pretty cheap, 3.5" not that much more [they have to supply more power than the USB cable provides]).
It shouldn't be that hard to network the computers together to avoid the USB caddy. I'd expect that somewhere between the broadband modem and either computer there is a [wifi?] router with at least 4 ethernet ports; plug both computers into it and set up a network (retail ethernet cables are wildly overpriced: try to scrounge or borrow one if this is a problem). A network should be faster than most USB caddies, but will require moving both computers to the same location.
I think you can keep Steam libraries on multiple disks, and if you are mainly concerned about games I would look there first. Windows is awfully picky about insisting that certain files must always be on the C: drive, but I hope Steam works better than that (my Steam library isn't on the C: drive, but it is all on one drive).
I'd be curious why the new drive isn't much larger than the old one. If the old one is a rotating drive and the new one an SSD, I'd want the SSD as the C: drive (but Windows makes this a real pain to do in practice with an existing system). If the game in question is a Steam game, I'd just move the whole library (using Windows Explorer) over to the new drive, tell Steam that is the new location, and verify all the games/applications.
There's a bit of a catch with the Ryzen L3: four cores share an L3. If there is an L3 miss, it can take some time to access the "other" L3s [and it is a lot of time compared to the Intel competition, but not quite as bad as going to DRAM]. Bulldozer didn't have this problem, as it only had to deal with 4 independent accesses to its L3 cache (the core pairs could be limited by the scheduler to avoid interference) and wasn't used for multi-chip monster servers like Epyc (and Threadripper).
In general this doesn't matter, but if you really have 16 threads (or more) all trying to use the same hot spots in memory you are going to want to spend the extra arm and a leg for Xeon.
But don't buy a Bulldozer unless you have extremely specific needs and know the programs you are running on it just happen to work well on a Bulldozer (and not a similarly priced Pentium). In nearly all cases (including 8-threaded code), the Pentium will win (not to mention that there are some pretty cheap Ryzens out there as well; just be careful about the difference between DDR3/DDR4, especially when comparing various Pentiums).
Obviously the question becomes: what competition is driving all the top-of-the-line GPU computing chips? I think they've released one and announced another since they launched the 1080. Is Google really that much of a threat (on machine learning/AI only, not supercomputing)?
They are designing new architectures and shipping them. They just don't need to ship consumer GPUs.
I had completely forgotten that HardOCP basically adjusts the image quality to get ~60Hz or so out of their games, so look up the settings they used. They also didn't overclock the 1080 at all (it was a Founders Edition, so assume stock clocks), but I can't imagine doubling the framerate.
Even then, I wouldn't bother trying to get 144Hz framerates. You can waste electricity all you want in old games, but I doubt you will see any difference over 100Hz (or probably even 70Hz). I can see the appeal of being able to display every frame that comes out of a 1080, and for all I know a 144Hz monitor is cheaper than G-Sync.
Great combination by the way. One thing I recommend is a monitor 'you will like to spend time looking at', because that is what you will be doing while on the computer. The 1080 is icing on the cake, keeping games pushing well beyond the God monitor's lesser competition (if not maxing out the God monitor).
What is the goal? I peeked at a 2016 review of a 1080 and saw that it averaged 73FPS and "maxed out" at 114FPS in Fallout 4, so don't expect to push 144Hz in anything that isn't pretty old. That said, I'm not sure you can see the difference between 73FPS and 144FPS (the minimum was 45FPS, which you probably could see).
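Frame-time arithmetic makes the comparison concrete: 144Hz demands a frame every ~6.9ms, while the 45FPS minimums mean 22ms frames, and it's the long frames (the minimums) that you actually notice as stutter. A quick conversion:

```python
# Convert FPS targets into per-frame time budgets in milliseconds.
def frame_time_ms(fps: float) -> float:
    """Milliseconds the GPU gets to render one frame at a given FPS."""
    return 1000.0 / fps

# The figures discussed above: review minimum, common targets, and 144Hz.
for fps in (45, 60, 73, 100, 144):
    print(f"{fps:>3} FPS -> {frame_time_ms(fps):5.1f} ms per frame")
```

Note how the budget between 100 and 144 FPS differs by only ~3ms per frame, while the gap between 45 and 73 FPS is ~8ms, which is one way of seeing why the minimums matter more than chasing the refresh-rate ceiling.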
That said, I can't see the logic of paying an additional $300+ (the cost of adding "ti" to a 1080) to justify a $350 monitor (that likely still won't hit 144Hz, but probably beat 70Hz or whatever a lesser monitor would show). That all depends on the software you want to run.
Assuming the NAS has spinning rust on one side or ethernet on the other, I'm fairly sure that an i3 can keep up with it. You haven't mentioned the software (or memory): ZFS is said to eat 8GB on its own, Windows might need that much (depending on the edition), and Linux should get by with well under that on a non-ZFS filesystem.
Obviously a sufficiently large number of users can overwhelm a NAS, but I'm guessing you will have to add multiple gigabit ethernet ports before changing the CPU... (I'm assuming a Sandy Bridge compatible motherboard doesn't include anything faster than gigabit ethernet).
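The "ethernet before CPU" claim is just arithmetic: a gigabit port tops out around 125MB/s of wire speed, which is less than even one modern spinning disk can stream sequentially. A rough sketch (the disk rate is a typical ballpark figure, not a measurement):

```python
# Back-of-envelope NAS bottleneck check; rates are ballpark figures.
GIGABIT_MBPS = 1000 / 8     # ~125 MB/s wire speed per gigabit port
HDD_SEQ_MBPS = 150          # one 7200rpm disk streaming sequentially (approx.)

def bottleneck(ports: int, disks: int) -> str:
    """Which side saturates first: the network links or the disk array?"""
    net = ports * GIGABIT_MBPS
    storage = disks * HDD_SEQ_MBPS
    return "network" if net < storage else "storage"

print(bottleneck(ports=1, disks=2))   # a single gigabit port is the limit
print(bottleneck(ports=4, disks=2))   # only with bonded ports do disks matter
```

The CPU doesn't appear in the model at all, which is the point: until you bond multiple gigabit ports, the i3 is loafing.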
Just don't edit the CAD on the server (without a local copy/cache). While this sounds like a joke, I actually worked at a place that did just that in the 1990s (and the CAD department ground to a halt). The answer was source revision control, not throwing CPU power at the NAS.
Rumors have been swirling about a "11xx" series card, but Jen-Hsun only talked about the AI/server-sized GPUs at the GTC (GPU tech conference). I'd say it isn't happening soon (they could still announce something, but I wouldn't get my hopes up).
I suspect that the announcement of the Bitmain "GPU replacement miner" was rushed to allow shorting AMD while investors were already being driven into panic by other attacks. It is too early to see when it will ship, how many coins it can mine, and what current GPU miners will do. Mining ASICs and stock analysis are both pretty shady businesses; I'd wait till someone more reputable gets their hands on shipping hardware before trusting this information.
I don't think there is that much left to mine with GPUs, but until the miners give up, the cards aren't coming down.
The software you (or your brother) plan on running is the key, never the rest of the hardware (beyond physical compatibility).
[Some] data was pulled off of hard drive platters recovered from the Columbia Space Shuttle wreckage. It depends on how much you are willing to pay, although the actual magnetic data is pretty much right on the edge of destruction anyway (it is pretty hard to damage the magnetic domains directly, but once you do, recovery is impossible).
Typically you swap disks with a compatible drive inside a clean room. If that isn't available expect things to get even more expensive. Just looking at the prices for the "swap disks in a clean room" should make everyone update their backups (you do have a current backup, don't you?).
Oops. Just checked some Titan V benchmarks (that hadn't been published when I wrote the above), and they are clearly better than the 1080 Ti. I'm curious why; presumably the enormous wad of silicon has room for more pipelines than the 1080 Ti and still has room for all the AI and scientific processing.
I'm pretty sure the AI bits don't take up much room/power and will likely be merely disabled on the non-Titan 11xx series hardware. The scientific [double precision] units take up around 4 times the room of normal multipliers (just the math part; moving the bits is another story) and are the bigger shock.
A better question is why isn't AMD producing as many Vegas and/or 580s as they can? Even if they are having HBM issues, they could still be producing 580s (or at least up to the limit of GDDR5 supply). AMD and nVidia appear not to be betting on an extended cryptocoin bubble (or at least not going to heroic lengths to score more GDDR5). That isn't to say it won't last, just that the big boys don't trust it at all.
Titan V is great if you care about machine learning (or possibly just use float16 instead of floats for graphics) or scientific calculations and don't care about ECC (not a whole lot of overlap between the two, but you can look up what UVA did with their non-ECC Apple Beowulf).
It looks more and more like the ultimate CUDA developer card, but has little other use than as "I'm rich" hardware. I could be wrong, but I suspect that you would be better off finding older drivers that let you SLI two 1080tis for gaming.
If the SSD is the C: drive, wouldn't Windows (especially a 2008 Windows setup) automatically do this? Setting "initial and maximum" size to the same might help things a bit, as would making sure the file is on the SSD (critical if the SSD isn't the C: drive, which might not have been the case in 2008).
Another thing to worry about is drive endurance. This might not be the smartest thing to do to a 2008 drive, or even a slightly older drive. On the other hand, I've been beating away at my old 96GB [i.e. very old] SSD in exactly the same way (and was using 4GB not that many years ago) and it hasn't given me any trouble.
Yes, in ye olden days we kept our virtual ram on spinning rust. And we were thankful.
I strongly suspect that the clock distribution circuits have been fixed, leaked frequencies would be rather unlikely with the old clock circuits (they really didn't like going over 3.8GHz). This wouldn't count as "microarchitectural".
If you see frequencies (especially turbo speeds) going significantly over 4GHz (4.2-4.3 turbo speeds leaked), that would be the tell. That isn't much more than a 10% increase [at the very top end], but clockspeed is pretty much the only bit where Ryzen is lacking.
The question is the same as "why have a GPU at all?" Why not use your CPU for all your rendering and shading needs?
CPUs are designed to run x86 (or ARM, Power, whatever) code. That instruction set was developed in the dawn of time for the 8086 and is backward compatible with 8080 assembler. Basically all CPUs are built on the "von Neumann" principle of taking an instruction from memory and executing it, one instruction at a time (Itanium advanced this to three instructions at a time; it was a bad idea). Nearly all the space and power of a CPU is spent making this go fast: figuring out which instructions can be executed ahead of time, how many can be done at once (up to about 3 or 4), and all kinds of issues around figuring out which are going to be done next (branch prediction).
GPUs, on the other hand, simply execute thousands of instructions at once. They are mainly limited by how much power it takes to do these simple instructions, and by the difficulty of moving all that data around the chip.
If the programmer can find thousands of identical things to do all the time, then the program should run on the GPU (there's also the catch that all those instructions must only need a few megs of outside data; while GPUs have much more bandwidth, it is peanuts compared to how much more compute power they have).
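The distinction can be sketched in plain Python: a data-parallel task applies one identical operation to thousands of independent elements (GPU-friendly), while a computation where each step depends on the previous result has no parallelism to extract (CPU territory). Purely illustrative, with made-up operations:

```python
# GPU-friendly: the same independent operation applied to thousands of
# elements; every element could be computed simultaneously.
pixels = list(range(10_000))
brightened = [min(p + 50, 255) for p in pixels]

# CPU-bound: each step depends on the previous result, so the steps must
# run one after another no matter how many execution units you have.
def serial_chain(x: int, steps: int) -> int:
    for _ in range(steps):
        x = (x * 31 + 7) % 1_000_003   # must finish before the next step
    return x

print(brightened[:3])
print(serial_chain(1, 10_000))
```

The first loop is what GPU programming models (shaders, CUDA kernels) express directly; the second is the shape of work that out-of-order CPU machinery exists to accelerate.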
CPUs do try to do "GPU-like" things, and the extensions are called AVX. The big catch is that, as far as I know, any memory load to AVX has to load 256-512 bits (32-64 bytes) all at once. GPUs can do individual lookups for tiny (32-256 bit) memory operations, and this is almost certainly critical for the SHA256 operation needed for bitmining.
I'm not sure of the difference between Bitcoin (which uses custom chips) and Ethereum (which uses GPUs), but I suspect that Bitcoin mining rapidly changes the last bytes needed for a "magic hash", while Ethereum has to run a lot more data through the hash each time.
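Bitcoin-style mining repeatedly hashes a small, fixed block header while incrementing a nonce until the double-SHA256 digest falls under a difficulty target, which is why the working set is tiny and ASIC-friendly. A toy version (made-up header, trivially low difficulty) using Python's hashlib:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose double-SHA256 of (header + nonce) starts with
    `difficulty_bits` zero bits. Toy difficulty only; real Bitcoin
    difficulty is vastly higher."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# With 12 difficulty bits, roughly one nonce in 4096 qualifies.
nonce = mine(b"toy-block-header", difficulty_bits=12)
print(nonce)
```

Note the entire input is a few dozen bytes hashed over and over; Ethereum's Ethash, by contrast, forces each hash attempt to read from a multi-gigabyte dataset, which is what keeps it on bandwidth-rich GPUs.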
It should also be obvious that any operation that makes you want a faster CPU and that can be moved to the GPU should be moved to the GPU. But in most cases this is obviously impossible, and for the rest it is exceedingly difficult. Still, if you can find something that can be done that way, go for it.
Way back when Moore's law was created, somebody also pointed out the "wheel of reincarnation" that closely matches this type of thing. People want a new form of graphics (text only, simple graphics, sprites, GUIs, 3D, VR) and CPUs aren't up to it. So graphics cards (or the whole computer) include the fancy new graphics hardware. Sooner or later the CPUs catch up, the dedicated hardware disappears, and the CPU controls the show. Then hardware for the "next new thing" appears and the cycle begins anew. But Moore's law isn't improving CPUs anymore (and it isn't clear that more transistors could speed up a CPU), so the cycle is ending (if you haven't figured it out by now, I'm sorta cheering the GPUs on to replace the CPU, but that is a very long shot (and they would end up looking a lot more like CPUs than they do now)).
I guess it all depends on what you are falling back on. You can surf pcpartpicker.com on just the built-in graphics, but even that might get annoying for the odd strategy fix. Two issues:
You aren't going to get any more for your card by waiting. If you aren't using it and plan to sell it, you might as well pull the trigger.
It might be a looooong wait for nVidia. They seem to like to update their AI/Supercomputer platforms but not their consumer graphics. I thought I read about a "GPU cycle slowdown" recently, but can't find a link. But miners seem to be grumpy about the effectiveness of 1080s recently.
A lot depends on the size of the hash. Bitcoin-style hashing works on ASICs and FPGAs. Ethereum and other coins that depend on GPUs require hashing more data (or otherwise need bandwidth that is cheaper to buy in GPU levels of bulk).
There isn't a hard and fast requirement to use GPUs. It's just that coin designers know GPU-based miners are out there and will provide amazing amounts of hashes for new coins they think might make money (and will be even more interested in coins where they know they won't be displaced by ASIC-based miners).
The market for this type of thing is really weird. As long as the coins return more money than the cost of electricity, they will keep buying up cards (and hashing ASICs). Once that stops, GPUs will split between places with sufficiently cheap electricity (until those become few and far between) and gamers (and of course suckers who will lose money mining). Don't expect a remotely rational market from anything involving "mining" (crypto or precious metals).
I've bought a few over the years, and have had few issues. My best guess is that nearly all of them were simply returned for whatever reason, cleaned, tested and sold as "refurbished" [a great example was a 6800LE: this was a card where half the GPU was disabled. I got a great price on a "refurbished" card, but could only enable one of the two disabled cores (for 75% of a full 6800). I strongly suspect that it was returned for failing the silicon lottery.]
These cards may have had a reduced lifespan, but I didn't notice: Moore's law was so good to GPUs for so long that replacing an old GPU with a cheap new GPU with a limited lifespan was nearly always a great deal. Don't expect this to remain true.
I'd also be less concerned with mining cards. Typically making money with mining requires reducing electricity costs, so mining GPUs "should" be underclocked while mining, with temperatures in the "doesn't use up any lifespan at all" range. There's always the danger of getting one used in a dorm or a parent's basement where nobody worried about the cost of electricity, but presumably that is a small fraction of mining cards.
Just make sure the thing has some kind of warranty. It might be 90 days (I'd take 30 if I had to), but while "refurbished" likely means "cleaned and tested" (yes, I've been there, done that), it can also mean "someone slapped new capacitors on without checking why the old ones blew" (seen that as well). Usually it was just the capacitor, but if it was something in the drivers/inductors/whatever, expect it to blow again quickly. You want it to still be under warranty if that happens (it hasn't happened to me as a user, and the devices I was working on were cash registers, not GPUs).
If many/most can do 4.3GHz, that will be great. But for once AMD has a product that stands on its own, and they don't need to give the traditional "AMD discount" (although Intel finally got around to producing two more cores at similar prices, so expect AMD to be forced to undercut them again). I was really hoping for unlockable cores with Zen; no such luck.
Look at what they are charging for Bulldozers (hint: buy a Pentium if you can't afford the cheapest Zen [or DDR4]), or Vega GPUs for that matter. Expect them to rival Coffee Lake roughly as well as Zen rivaled Kaby Lake, but not much more.
Raw clockspeed and single-threaded performance are their weak spot (and it shows up most in gaming). Fixing that would help a lot (unfortunately, I'm not sure that is a clear goal for Zen 2. The big money is in servers and laptops, and going much over 4GHz won't buy you much there).
Don't expect miracles (Zen is enough of a miracle). Expect the clock wall to move a bit (presumably the clock distribution circuits are slower than the rest of the chip needs), but don't expect it to quite catch Intel's clockspeeds. Also don't expect current prices to last (Microcenter is selling a 6-core Zen + motherboard for $220 (with a $20 rebate); the catch is only two memory slots (harsh thanks to current RAM prices)).
I'd certainly want to see Zen+ and have some idea of Intel's meltdown strategy before committing to a CPU (should be known in April).
That's a ton of RAM (indicating perhaps a little more hardcore than I expected) but no spinning rust for bulk storage. Ugandan_knuckes includes 1TB for $46; Hitachi also has 2TB for $60 and 3TB for $65. I'd go to 16GB of RAM, and add spinning storage and a monitor I'd enjoy watching.
Remember, you only need all that performance at performance bottlenecks, but you will be staring at that monitor all the time. Make sure you are happy with it (same for the keyboard and mouse, but the monitor stood out as included and a parametric choice). I doubt that the 2400G will have any trouble maxing out the monitor.
PS: if this is your first build, don't forget the windows license (yes, nobody ever puts it in the parts list, but it is a nasty surprise once you've already blown your budget). Video editing on Linux isn't so bad, but gaming sucks.
What resolution monitor are you using, and are you getting 60fps (or whatever your goal is)?
I'm running a 560 Ti and it is working fine at 1920x1080 (although mainly on a Unity game (KSP) and an ancient MMO (LOTRO); modern stuff might crash the performance). Whatever you find, I'd try to find benchmark comparisons between a 660 Ti and your contender at the resolution you use. You might want to save your money.
I'd have said an AMD RX 480, but it looks like they are sold out thanks to cryptominers. But check to see if the improvement of a 1060 is enough (not applicable if your monitor is higher than 1920x1080).