While peat regenerates, it does so slowly, so they might actually have burned off some measurable elevation (on average)
That’s what major versions are for - breaking changes. Regardless, you should probably be able to fix this with some regex hackery. Something along the lines of
import re

# Insert "(" after bare print statements, then close the paren at end of line
new_file_content = re.sub(r'(?<=\bprint)(\s+)(?!\()', '(', old_file_content)
new_file_content = re.sub(r'(print\(.*?)(\n|$)', r'\1)\2', new_file_content)
should do the trick.
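For example, on a made-up two-line Python 2 snippet (still just a heuristic: multi-line prints or prints with trailing comments will need manual attention):

import re

old_file_content = 'print "hello", name\nprint 42\n'
new_file_content = re.sub(r'(?<=\bprint)(\s+)(?!\()', '(', old_file_content)
new_file_content = re.sub(r'(print\(.*?)(\n|$)', r'\1)\2', new_file_content)
# new_file_content == 'print("hello", name)\nprint(42)\n'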
For someone starting out, I would say that a major advantage of Python over any compiled language is that you can just create a file and start writing/running code. With C++ (which I’m also a heavy user of) you need to get over the hurdle of setting up a build system, which is simple enough when you know it, but can quickly be a high bar for an absolute beginner. That’s before you start looking at things like including/linking other libraries, which in Python is done with a simple import, but where you have to set up your build system properly to get things working in C++.
Honestly, I’m still kind of confused that the beginner course at my old university still insists on handing out a pre-written makefile and VS Code config files to everyone instead of spending the first week just showing people how to actually write and compile hello world using cmake. I remember my major hurdle when leaving that course was that I knew how to write basic C++; I just had no idea how to compile and link it once I could no longer use the makefile that we were explicitly told never to touch…
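For reference, the whole hello-world setup really is tiny (a minimal sketch; the project name is arbitrary):

# CMakeLists.txt, sitting next to main.cpp
cmake_minimum_required(VERSION 3.15)
project(hello)
add_executable(hello main.cpp)

# then, from the same directory:
cmake -B build
cmake --build build
./build/hello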
Idk why you guys are so passionate about this whole rounding thing? Rounding off 107 to 100 doesn’t change the information, only the precision. It’s not easier to interpret 200 than 212 or anything?
If you want quick conversion, just
F ≈ 2 * C + 30
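For example: 20 °C → 2 · 20 + 30 = 70 °F, versus the exact 68 °F. The rule stays within a few degrees anywhere between roughly 0 and 30 °C.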
Centi = 1e-2, deci = 1e-1
Regards,
Non-American
Yes, it’s a field. Specifically, a field containing human-readable information about what is going on in adjacent fields, much like a comment. I see no issue with putting such information in a JSON file.
As for “you don’t comment by putting information in variables”: In Python, your objects have the __doc__ attribute, which is specifically used for this purpose.
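A minimal sketch of both ideas (the function and field names are made up):

import json

def scale(x):
    """Multiply x by two."""  # the docstring, stored on the function object
    return 2 * x

print(scale.__doc__)  # -> Multiply x by two.

# The JSON analogue: a human-readable field next to the data it describes
config = json.loads('{"timeout_s": 30, "timeout_s_note": "includes retries"}')
print(config["timeout_s_note"])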
I never understood that. Apparently they use it as a primary way of messaging each other? At least that’s what younger relatives have told me. I’ve tried to have them explain what makes the app designed to hide/delete stuff after it’s been read better for communication, but so far haven’t gotten an explanation I could make any sense of.
“Enshittification will continue until revenue improves”
I’ve found that regex is maybe the programming-related thing GPT is best at, which makes sense given that it’s a language model, and regex is just a compact language with weird syntax for describing patterns. Translating between an English description of a pattern and regex shouldn’t be harder for that kind of model than any other translation, so to speak.
In general I agree: ChatGPT sucks at writing code. However, when I want to throw together some simple stuff in a language I rarely write, I find it can save me quite some time. Typical examples would be something like
“Write a bash script to rename all the files in the current directory according to <pattern>”, “Give me a regex pattern for <…>”, or “write a JavaScript function to do <stupid simple thing, but I never bothered to learn JS>”
Especially using it as a regex pattern generator is nice. It can also be nice when learning a new language and you just need to check the syntax for something - often quicker than swimming through some GeeksforGeeks blog about why you should know how to do what you’re trying to do.
My test suite takes quite a bit of time, not because the code base is huge, but because it consists of a variety of mathematical models that should work under a range of conditions.
This makes it very quick to write a test that’s basically “check that every pair of models gives the same output for the same conditions” or “check that re-ordering the inputs in a certain way does not change the output”.
If you have 10 models, with three inputs that can be ordered 6 ways, you now suddenly have 60 tests that take maybe 2-3 sec each.
Scaling up: It becomes very easy to write automated testing for a lot of stuff, so even if each individual test is relatively quick, they suddenly take 10-15 min to run total.
The test suite is now ≈2000 unit/integration tests, and I have uncovered an obscure bug because a single one of them failed.
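The pattern looks roughly like this (a sketch with toy stand-in models; my real ones obviously aren’t one-liners):

import itertools
import pytest

# Hypothetical stand-ins: two "models" that should agree on all inputs
def model_reference(a, b, c):
    return a + b + c

def model_fast(a, b, c):
    return sum((a, b, c))

MODELS = [model_reference, model_fast]
CONDITIONS = [(1.0, 2.0, 3.0), (0.5, -1.0, 4.0)]

# Every pair of models must give the same output for the same conditions
@pytest.mark.parametrize("inputs", CONDITIONS)
@pytest.mark.parametrize("m1,m2", itertools.combinations(MODELS, 2))
def test_models_agree(m1, m2, inputs):
    assert m1(*inputs) == pytest.approx(m2(*inputs))

# Re-ordering the inputs must not change the output
@pytest.mark.parametrize("order", itertools.permutations(range(3)))
@pytest.mark.parametrize("inputs", CONDITIONS)
@pytest.mark.parametrize("model", MODELS)
def test_order_invariance(model, inputs, order):
    shuffled = [inputs[i] for i in order]
    assert model(*shuffled) == pytest.approx(model(*inputs))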
Ngl you had me until the 1772 bit
This is a very “yes but still no” thing in my experience. Typically, I find that if I write “naive” C++ code, where I make no effort to optimise anything, I’ll outperform Python code that I’ve spent time optimising by a factor of 10-30 (given that the code is reasonably complex; this obviously isn’t true for a simple matrix multiplication where you can just use numpy). If I spend some time on optimisation, I’ll typically be outperforming Python by a factor of 50+.
In the end, I’ve found it’s mostly about what kind of data structures you’re working with, and how you’re passing them around. If you’re primarily working with arrays of some sort and doing simple math with them, using some numpy and scipy magic can get you speeds that will beat naive C++ code. On the other hand, when you have custom data structures that you want to avoid unnecessarily copying, just rewriting the exact same code in C++ and passing things by reference can give you massive speedups.
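As a contrived illustration of the array case (numbers are arbitrary; on arrays this size the vectorised version typically wins by a couple of orders of magnitude):

import numpy as np

values = np.random.default_rng(0).random(1_000_000)

# Pure-Python loop: every element access and multiply goes through the interpreter
def rms_loop(vals):
    total = 0.0
    for v in vals:
        total += v * v
    return (total / len(vals)) ** 0.5

# Vectorised: the same computation runs inside numpy's compiled loops
def rms_numpy(vals):
    return float(np.sqrt(np.mean(vals ** 2)))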
When I choose C++ over Python, it’s not only because of speed. It’s also because I want a more explicitly typed language (which is easier to maintain), overloaded functions, and to actually know the memory layout of what I’m working with to some degree.
The point the other commenter is making, which I fully agree with, is that I can have legitimate reasons for not wanting to update. Windows shoving updates down my throat when they can potentially break critical stuff on my machine is pretty much just equivalent to forcing malware on me.
Of course, Li-ion batteries will never cover large-scale storage demand. That’s not primarily because of a lack of lithium, but because the technology scales far too poorly into the MWh/TWh range and has far too short a lifetime.
The battery tech we need for truly large-scale storage is different from what we need for small, portable storage. Stuff like redox-flow batteries is looking promising.
There’s also hydrogen, with different storage methods being actively researched - from direct storage to using ammonia as a carrier.
The issue with using mechanical storage (like pumped hydropower) is threefold (off the top of my head):
1. It’s heavily geography-dependent: you need suitable terrain and water, and the good sites are limited.
2. Building it is slow and capital-intensive, with a significant environmental footprint.
3. You lose a meaningful fraction of the energy in the round trip (pumping up and generating back down is roughly 70-80% efficient).
I’m not saying pumped hydropower isn’t part of the solution: I believe the solution is that we need many solutions. I just think it’s important to point out that battery tech isn’t some monolithic thing, and that there are issues with pumped hydropower (and mechanical storage in general).
And that’s just regarding stuff that’s distributed pre-built with a package manager. Truth is, if you’re down to build stuff from source, you can just follow the Linux guide and everything will work right out of the box far more often than not.
I’ve been doing the same thing, went back to read it now, and I have to admit I had a good time. Even though it took time to manually turn my comments into gibberish, it gave some hilarious results!
Came here to say this. Just get Homebrew up and running. Once you have gcc and your other basic tools installed, there are very few Linux guides that won’t work on a Mac. A couple of shell tools have different names, but that’s about it.
I would say “debunked” in the sense that quantum mechanics correctly predicts phenomena that don’t exist in classical physics, and relies on the idea that quantum particles obey a probability distribution, rather than deterministic mechanics.
Quantum mechanics appears to work so well for these phenomena compared to deterministic mechanics that it’s tempting to say that the actual universe is in fact governed by probabilities rather than determinism.
I would argue that all physical models of the universe are just that: models. We can get asymptotically closer to a perfect description of the universe, but no model can ever tell us the true nature of the underlying system it describes; it can only ever be an arbitrarily good description of it.