• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: December 22nd, 2023

  • the destruction from uranium mining is far less than that from mining rare earth metals, coal, oil, gas, iron, copper, bauxite, can i keep going? You need VASTLY less uranium than ANY of these other materials. It’s quite literally a non-concern at scale.

    the toxic cooling water

    you clearly understand nothing about nuclear power. Do you live next to or near a nuclear power plant? If so, can you tell me which plant it is so i can do some research on it? Even if i grant you this argument, it really only applies to the BWR design, which is ancient and hasn’t been built new in decades, and which technically does have radiation products in the primary turbine loop. The cooling loop is mechanically isolated and has ZERO radiation products in it unless it fails, and even if it DID fail, the products would decay so quickly that the chance of them causing harm is almost zero, not to mention that the plant would probably shut down very quickly.
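    The “decays so quickly” part is easy to put numbers on: the dominant activation product in BWR primary coolant is nitrogen-16, with a half-life of about 7.1 seconds. A rough sketch of how fast that activity disappears (illustrative, nothing plant-specific):

```python
# Nitrogen-16, the dominant activation product in BWR primary coolant,
# has a half-life of roughly 7.1 seconds.
HALF_LIFE_S = 7.1

def fraction_remaining(seconds: float) -> float:
    """Fraction of N-16 activity left after the given number of seconds."""
    return 0.5 ** (seconds / HALF_LIFE_S)

print(f"after 60 s:  {fraction_remaining(60):.2e}")
print(f"after 120 s: {fraction_remaining(120):.2e}")
```

    Within a couple of minutes the activity is down by about five orders of magnitude, which is why a tripped plant doesn’t stay “hot” in the turbine loop for long.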

    If we’re talking modern reactor designs like the PWR, they have a pressurized primary loop, which is going to contain radiation products; but a pressurized loop is unsuitable for running a turbine directly, so it’s coupled through a heat exchanger to the turbine loop, which is in turn coupled to another heat exchanger, so the chances of BOTH of these loops failing and releasing radiation products are vanishingly small. Even TMI had no significant known release of radiation products. There have been groups and studies claiming there was, but those weren’t suitably backed up and provide no significant proof; there’s also plenty of evidence against those claims, notably that the reactor PCV wasn’t penetrated, meaning the accident was entirely contained. It’s extremely unlikely any amount of radiation got outside that containment, and if it had, we would know about it.

    Fukushima is probably the go-to example here, but Fukushima was a BWR reactor, and, uh, fucking exploded. I only know of three nuclear incidents where reactors exploded: Chernobyl, an objectively bad reactor core design; SL-1, which was operator error plus a bad design; and Fukushima, which was operator error, bad design, bad regulation, and bad handling. TMI just melted, so nothing funny happened there.

    A little bonus tidbit here: modern designs are going to be either gas or metal/salt cooled, where it’s practically impossible to have a significant failure event, especially with designs like the SSR. Even if you did manage to spill metal/salt fuel, it’s going to be self-contained within the fuel itself. The SSR design takes this one step further and puts the fuel into fuel rods, which then sit in a salt pool.

    spent radioactive fuel rods

    these are only a problem for certain reactor designs. Designs like the CANDU reactor, as well as fast reactor designs (molten salt/metal-cooled reactors are typically fast reactors, btw), can actually burn the spent waste from PWR designs as fuel, bringing it down to a much safer, less significant point in the product chain. By that point, encasing the waste in concrete is going to absorb essentially all of the radiation emitted, and any sort of criticality incident is going to be impossible. And if you’re REALLY concerned about these casks, go put them far underground in a big deep hole.
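    The “much safer point in the product chain” claim can be roughed out with cesium-137, one of the medium-lived fission products (along with strontium-90, both with half-lives around 30 years) that dominate spent-fuel activity once the short-lived isotopes burn off:

```python
# Cs-137 (half-life ~30.1 years) is one of the fission products that
# dominates spent-fuel activity after the short-lived isotopes decay away.
CS137_HALF_LIFE_Y = 30.1

def activity_fraction(years: float, half_life_y: float = CS137_HALF_LIFE_Y) -> float:
    """Fraction of the original activity remaining after `years`."""
    return 0.5 ** (years / half_life_y)

for years in (30, 100, 300):
    print(f"after {years:>3} years: {activity_fraction(years):.2e} of original activity")
```

    After ten half-lives (~300 years) you’re down by a factor of about a thousand, which is the point of the casks: outlast the medium-lived products, and what’s left is the slow, low-activity tail.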

    contaminated machinery.

    we’ve literally been working with this shit since nuclear bombs; contamination is quite literally a solved problem. Some reactor designs can even burn straight unprocessed uranium, and though the after-products are particularly nasty, those can also be burnt off.

    Real clean…

    compared to something like coal? Absolutely. Even compared to the fabled wind and solar energy, it’s still right up beside them in the rankings. Nuclear power is only bad if you’re scared of it.
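    For what it’s worth, the ranking claim lines up with published lifecycle figures. These are the IPCC AR5 median estimates in g CO2-equivalent per kWh, rounded and quoted from memory, so treat them as approximate:

```python
# Median lifecycle emissions, g CO2-eq per kWh (IPCC AR5 medians, rounded;
# approximate figures, for illustration only).
LIFECYCLE_G_PER_KWH = {
    "coal": 820,
    "gas (combined cycle)": 490,
    "solar PV (utility)": 48,
    "hydro": 24,
    "nuclear": 12,
    "wind (onshore)": 11,
}

for source, grams in sorted(LIFECYCLE_G_PER_KWH.items(), key=lambda kv: kv[1]):
    print(f"{source:<22} {grams:>4} g CO2-eq/kWh")
```

    On these numbers nuclear sits right beside wind and below solar PV, roughly 70x cleaner than coal over the lifecycle.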




  • yeah, i’m definitely not as aggressive on that, but then again i also don’t really like having a lot of things on my network, or connected to my grid, so i suppose i just sort of optimize that problem out. Plus, like i said, convenience: running 120 V and 240 V is going to be significantly more beneficial for me, since i primarily use high-wattage devices that benefit from more efficient transmission and conversion (servers and basically any high-power switching supply).

    I’ve thought about doing a low-voltage network, but that really only seems like a bigger mess for no significant gain; i’d have to have central DC conversion and regulation now? I’m just not sure it’s worth it, unless i’m pulling straight from a dedicated battery bank or something, but that doesn’t really make sense to me. I might end up using lower-voltage LED products for a lot of the lighting, but i think i’d rather have a handful of high-quality, high-efficiency power supplies than one global supply and some weird-ass 48 V system where i need to convert from AC, then up/down-convert for each device as needed. It seems like a bit much for removing the AC conversion part of the problem, but that’s just me i guess.

    One of the nice things about 120/240 is that our grid is sort of designed for it, so there are some clever ways you can go about utilizing it appropriately. Certain plug specs use both hot/live legs plus neutral (and ground), so you can technically pull both 120 and 240 V out of a single receptacle, which is quite the trick. You could also fairly easily wire up both in more standardized outlet receptacles as well (although i dunno what the electrical code looks like for that).
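    The single-receptacle trick works because North American split-phase service feeds two hot legs that are 180° out of phase: each leg to neutral is 120 V RMS, and leg to leg is 240 V RMS. A quick numeric sanity check, with idealized sine waves (illustrative only):

```python
import math

RMS_LEG = 120.0                  # each hot leg to neutral, volts RMS
PEAK = RMS_LEG * math.sqrt(2)
FREQ_HZ = 60.0

def leg_a(t: float) -> float:
    return PEAK * math.sin(2 * math.pi * FREQ_HZ * t)

def leg_b(t: float) -> float:
    # the second hot leg is shifted 180 degrees
    return PEAK * math.sin(2 * math.pi * FREQ_HZ * t + math.pi)

# Sample one full cycle and compute the RMS of the leg-to-leg voltage:
n = 10_000
samples = [leg_a(i / (FREQ_HZ * n)) - leg_b(i / (FREQ_HZ * n)) for i in range(n)]
rms_leg_to_leg = math.sqrt(sum(v * v for v in samples) / n)
print(f"leg-to-leg RMS: {rms_leg_to_leg:.1f} V")
```

    Hot-to-neutral gives you 120 V for ordinary loads; hot-to-hot gives 240 V for the big ones, all off the same service.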

    My ultimate goal would be a decentralized, off-grid production/storage solution, so high efficiency at higher draws is going to be really important, as well as the ability to standardize on a widely accepted voltage standard. The only real advantage i can think of for a DC grid is that it would be safer, but, like, that’s a solved problem, so idk.

    personally i’m not huge on smart grid stuff, though i do like the idea of smart grid management: being able to do “useful” things with excess generated power, or pull from storage banks at will under a rule set defined in a smart home system, is way too convenient to ignore.



  • it literally is clean. The only dirty parts are building the plant and mining the uranium, and the only unique part there is the uranium mining (and technically the scale of construction), but i’m still not convinced that a nuclear plant produces more CO2 in its construction phase than it offsets over its lifetime. Maybe solar and wind edge it out, but again, nuclear energy already exists; it’s a heavily established and well-regulated industry, so it shouldn’t be the first focus on the chopping block. Especially compared to all the modern problems we have with solar, like the rare earth metals and the mining conditions often involved. Wind turbines are better, but have issues with scaling and waste.




  • yeah and, uh, i don’t know if you noticed, but generally more than one person lives in an apartment building; it’s about as good as it’s going to get unless you’re installing solar with taxpayer money or utility company money.

    While tracking might let you collect more energy, you also lose more of your balcony, and you’re back to making the install expensive and complicated. Not worth it

    don’t use tracking on a balcony??? Also, not all tracking setups are expensive and complicated; the entire reason you’d want one is to greatly increase total power production throughout the day, and you can easily weigh the complexity, maintenance, and additional install costs against the potential saved/produced value of the array after installation.

    I mean, if you’re doing 2-axis tracking, sure, it’s probably more expensive, but 1-axis tracking is still reasonably effective, especially if you’re in a decent spot and able to take advantage of it. The other option is installing more panels in total, and when you’re space-limited, that becomes a constraint.
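    The fixed-vs-tracked tradeoff can be roughed out with a simple cosine model: direct-beam collection scales with the cosine of the incidence angle, and a single-axis tracker keeps that angle near zero as the sun sweeps past. This is a clear-sky sketch that ignores diffuse light, atmosphere, and seasons, so it overstates the real gain (actual single-axis installs typically see more like 1.2–1.35x):

```python
import math

def daily_energy(tracking: bool, steps: int = 1000) -> float:
    """Relative clear-sky energy over one day; the sun sweeps -90..+90 degrees."""
    total = 0.0
    for i in range(steps):
        sun_angle = math.radians(-90 + 180 * i / steps)
        incidence = 0.0 if tracking else sun_angle  # a tracker faces the sun
        total += max(0.0, math.cos(incidence))
    return total / steps

fixed = daily_energy(tracking=False)
tracked = daily_energy(tracking=True)
print(f"idealized single-axis gain: {tracked / fixed:.2f}x")
```

    Run the realistic gain figure against panel prices and you can decide whether the tracker or simply more panels wins for a given site.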



  • in an apartment specifically? Why would you build your own house when you can build a large building and then live in segmented housing blocks within that building?

    It’s literally just breaking the entire idea of apartment-block housing for the purpose of providing less usable, less functional solar power. If you want to do your own install on top of the building’s existing one, go ahead, nobody is going to stop you, but you’d see more returns by installing solar directly on the roof of the building in the first place. Economy of scale is going to be advantageous in literally any case; that’s just the truth.


  • ok so, even if we assume you NEED to do this, which is an errant assumption (you can just not have that problem through over-provisioning), you could use that extra generated power for other things: selling to industry, energy storage, a community center, whatever. There are literally endless things you could do with free power; most often you just dump it into heating, since it’s cheap, and storing it in something like water is fairly trivial.
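    The water-storage point is easy to quantify from the specific heat of water, about 4186 J per kg per °C:

```python
SPECIFIC_HEAT_WATER = 4186.0  # J per kg per degree C; 1 L of water ~ 1 kg

def stored_kwh(liters: float, delta_t_c: float) -> float:
    """Energy banked by heating `liters` of water by `delta_t_c` degrees C."""
    joules = liters * SPECIFIC_HEAT_WATER * delta_t_c
    return joules / 3.6e6  # joules -> kWh

# A common 300 L tank heated from 20 C to 60 C:
print(f"{stored_kwh(300, 40):.1f} kWh banked")
```

    That’s roughly 14 kWh in an ordinary hot-water tank, a meaningful chunk of a household’s daily consumption absorbed by the dumbest possible load.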

    In the worst-case scenario, your required grid imports are still going to be less than they currently are, which means less external grid maintenance and less strain.

    Granted, it’s not going to be used year-round, unless of course you over-provision production so you can consume in the winter and produce in the summer, where now you’re getting effectively double the usage, if not more. You probably won’t reach peak micro-grid infrastructure, but the flexibility provided by something like solar is worth the consideration. A really good example of this is actually the Texas power grid; although that’s a pretty large grid, i never said you should island micro-generation, just that it’s likely to be beneficial.
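    The over-provisioning arithmetic is simple enough: size the array off the worst month and treat the summer surplus as the bonus. Purely illustrative numbers below; real yields depend heavily on latitude and weather:

```python
def array_size_kw(daily_load_kwh: float, worst_month_kwh_per_kw: float) -> float:
    """Size the array so the worst (winter) month still covers the daily load."""
    return daily_load_kwh / worst_month_kwh_per_kw

# Say the household uses 20 kWh/day, and 1 kW of panels yields about
# 1.5 kWh/day in deep winter vs 5 kWh/day in summer (assumed figures):
size = array_size_kw(20, 1.5)
summer_surplus = size * 5.0 - 20
print(f"array: {size:.1f} kW, summer surplus: {summer_surplus:.1f} kWh/day")
```

    That surplus is the free power you can dump into heating, storage, or sales.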



  • not very much, especially during the winter. The best way to optimize panel production is by pointing it towards the sun most effectively; the farther north or south of the equator you are, and the less directly the panel points towards the sun in general, the less power you make.

    It might still produce a decent amount of power over a reasonable period of time, but it’s probably WELL below what you could be making with an optimized install, especially one with solar tracking. Granted, some solar power is still better than no solar power, so there are tradeoffs at the end of the day.
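    The “points towards the sun” rule is just a cosine: direct-beam output scales roughly with the cosine of the angle between the sun and the panel’s normal. Comparing a vertical balcony panel against a tilted one (simplified to the vertical plane, direct beam only; the sun elevation is an assumed figure):

```python
import math

def relative_output(sun_elevation_deg: float, panel_tilt_deg: float) -> float:
    """Direct-beam output factor for a panel facing the sun's azimuth.
    Tilt is measured from horizontal, so 90 means a vertical balcony panel."""
    panel_normal_elevation = 90.0 - panel_tilt_deg
    incidence = abs(sun_elevation_deg - panel_normal_elevation)
    return max(0.0, math.cos(math.radians(incidence)))

# Midday summer sun at 60 degrees elevation, mid-latitudes (assumed):
print("vertical panel: ", round(relative_output(60, 90), 2))
print("30 degree tilt: ", round(relative_output(60, 30), 2))
```

    A vertical panel collects about half the direct beam of a well-tilted one under a high sun, which is the kind of gap the solar calculators will show you.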

    as another commenter said, there are solar power calculators out there, if you’re looking for rough figures, use them.




  • microgeneration purely in DC only really makes sense in things like campers and RVs, where you’re going to be running mostly nearby, low-power devices.

    AC is still better. Plus, modern switching technology, while still fairly expensive, is considerably more efficient now. If you’re doing AC you also get a number of other benefits: notably, literally every existing appliance and device works with AC voltages, the entire standard around electricity and home wiring is based on AC mains, and all of the accessible hardware is produced for AC mains. Not that you can’t use it for something else, it’s just not intended for that.

    Certain appliances use induction motors and similar tech based directly on the AC sine wave (clocks, for example, often use the grid frequency to keep time). You could still run them on DC, it’s just significantly sillier. Plus, transmission loss is a BIG problem with low-voltage DC (even now, with modern solid-state switching components, it’s still just not ideal); granted, that’s less of a problem at micro-grid scale, but it’s still a concern and a potential restriction, and nothing beats the simplicity and reliability of a simple wire-wound iron-core transformer. There are a handful of other technical benefits and drawbacks as well, but they’re fairly minor.
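    The transmission point is just I²R in action: for a fixed power the current scales as 1/V, and resistive loss scales with the current squared, so low-voltage runs bleed power fast. A sketch with an assumed 0.1 Ω of total wire resistance:

```python
def line_loss_w(power_w: float, volts: float, wire_ohms: float) -> float:
    """Resistive loss in a run: P_loss = I^2 * R, with I = P / V."""
    current = power_w / volts
    return current ** 2 * wire_ohms

# The same 1500 W load over a run with 0.1 ohm total wire resistance:
for volts in (12, 48, 120, 240):
    print(f"{volts:>3} V: {line_loss_w(1500, volts, 0.1):7.1f} W lost in the wire")
```

    At 12 V the wire would dissipate more than the load itself draws; at 120/240 V the same run loses only a few watts, which is why mains voltage (or very fat copper) wins for high-draw devices.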

    Having a dedicated DC supply side might be nice in a home environment, but the question is what you standardize on. DC/DC conversion is fairly efficient as it is already, and AC/DC conversion is incredibly easy and not particularly inefficient at low power draw; it’s more of a problem for higher-draw devices, but you can get around that by converting down from a higher voltage.
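    The standardization question is partly about stacking conversion stages, since converter efficiencies multiply. The stage efficiencies below are assumed round numbers, not measurements:

```python
from functools import reduce

def chain_efficiency(*stage_efficiencies: float) -> float:
    """Overall efficiency of cascaded converters: efficiencies multiply."""
    return reduce(lambda acc, eta: acc * eta, stage_efficiencies, 1.0)

# One good AC/DC supply right at the device (assumed 94% efficient):
direct = chain_efficiency(0.94)
# AC -> central 48 V DC bus (95%) -> per-device DC/DC step-down (93%):
central_bus = chain_efficiency(0.95, 0.93)
print(f"direct AC/DC at the device: {direct:.1%}")
print(f"central 48 V bus + DC/DC:   {central_bus:.1%}")
```

    With plausible numbers, the central DC bus doesn’t obviously beat a handful of good supplies, which is the “is it worth it” problem in a nutshell.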


  • a mix of both is good; there are arguments for doing local co-generation, where you essentially turn a community into its own power plant, and when you’re talking about things like micro-inverters, the cost doesn’t really change.

    Is it more efficient at utility grid scale? Yes. Does that make it overall better? Not really: you still have to deal with grid inefficiencies and maintenance, and, well, you still have to deal with installations, so the cost difference isn’t that significant at the end of the day.

    Solar is one of very few renewable energy sources you can actually build and maintain locally on a small scale; no sense in removing that utility from it. That’s part of the reason it’s so popular.