submitted 4 months ago* (last edited 4 months ago) by LainTrain@lemmy.dbzer0.com to c/programming@programming.dev

I've found lots of example C programs illustrating buffer overflows, including ones demonstrating pointer rewrites, which have been a great help in understanding how a buffer overflow works, memory safety, etc. But I've yet to find an example illustrating how such an overflow can rewrite a pointer in a way that actually results in code execution.

Is this just not a thing, or is my google-fu rusty? I tried ChatGPT and my local Mistral, and they both seem unable to spit out precisely what I'm asking for, so maybe I'm wording the question wrong.

If anyone in here knows, could you point me in the right direction? Thanks y'all, btw love this community 🧡

[-] paw@feddit.org 26 points 4 months ago* (last edited 4 months ago)

This is an old paper that explains the basics: https://www.eecs.umich.edu/courses/eecs588/static/stack_smashing.pdf

Today there are a lot of mitigations that keep the exact steps in the paper from working anymore, but the general ideas should still be valid. I hope you find the example you're looking for in there.

On another note: What is your intention? And can I participate 😈

[-] tal@lemmy.today 10 points 4 months ago

Today there are a lot of mitigations where the steps of the paper don’t work anymore,

Yeah, that's fair to note. In many environments today, the base address of the stack is randomized, which is aimed at making it hard to rely on a static address when exploiting buffer overflows targeting the stack:

https://en.wikipedia.org/wiki/Address_space_layout_randomization

https://en.wikipedia.org/wiki/Buffer_overflow_protection

Historically, part of exploiting such a buffer overflow might have included the malicious code to be invoked, as a way to get it into memory. The introduction of the NX bit to x86 permitted OSes to mark regions of memory, via the CPU's MMU, as only being able to contain data, not executable code. That made it significantly harder to craft a buffer overflow that both seized control of the instruction pointer and contained the hostile code itself.
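A tiny way to watch that mitigation in action (a sketch, assuming x86/x86-64 Linux and gcc; the object-pointer-to-function-pointer cast is technically undefined behavior, but that's rather the point):

```c
#include <string.h>

int main(void) {
    unsigned char code[] = { 0xc3 };        /* x86 machine code for RET */
    unsigned char buf[16];

    memcpy(buf, code, sizeof code);         /* plant an instruction in stack data */

    void (*fn)(void) = (void (*)(void))buf; /* treat the buffer as code */
    fn();   /* SIGSEGV if the stack page is marked non-executable;
               harmlessly bounces back if the stack is executable */
    return 0;
}
```

Built with default settings on a modern Linux toolchain, the call crashes; linked with something like gcc's -z execstack, the lone RET simply returns and the program exits normally.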

[-] paw@feddit.org 4 points 4 months ago

Thanks for your reply. This extends "smashing the stack for fun and profit" in an important way.

[-] LainTrain@lemmy.dbzer0.com 1 points 4 months ago* (last edited 4 months ago)

Interesting point.

This makes it seem like the whole concern about memory safety has become almost redundant. The chances of exploitation are so remote that it must take an incomprehensible amount of work to discover a functional exploit that would be useful to attackers in modern software.

[-] LainTrain@lemmy.dbzer0.com 10 points 4 months ago

I just want to learn in more practical terms how exploits like this function in the wild haha, but eventually I do hope to become a C chad and even an assembly chad, understand how computers actually work, and perhaps shake the impostor syndrome of being a skid Nessus monkey when it comes to pentesting, and do something worth doing :)

[-] tal@lemmy.today 24 points 4 months ago* (last edited 4 months ago)

So, I'm not going to discourage people from doing stuff that they're interested in, but if you're interested in heading down the path of low-level security work like this as a career, I'll leave you with this piece of advice, which I think is probably more valuable than the actual technical information I provided here:

A very great amount of security knowledge, and especially low-level security knowledge, has a short shelf life. That is, you spend time to understand something, and it's only useful for a limited period of time.

In software engineering, if you spend time to learn an algorithm, it will probably be useful pretty much forever; math doesn't change. Sometimes the specific problems that an algorithm is especially useful for go away, and sometimes algorithms are superseded by generally-superior algorithms, but algorithms have a very long shelf life.

Knowledge of software engineering at, say, the programming-language level doesn't last as long. There's only so much demand for COBOL programmers today, though you can carry what you learned in one environment into another to a fair degree. But as long as you choose a programming language that is going to be around for a long time, the knowledge you spend time acquiring can continue to be valuable for decades, probably your entire career.

Knowledge of a particular program has a shorter life still. A very few programs will be around for an extremely long period of time, like emacs, but it's hard to know in advance which ones those will be (though my experience has been that open-source software tends to do better here). For example, I have gone through various version control systems -- CVS, VCS, SVN, BitKeeper, Mercurial, git, and a handful of others -- and the time I spent learning the specifics of most of them is no longer very useful.

Your professional value depends on your skillset, what you bring to the table. If you spend a lot of time learning a skill that will be applicable for your entire working life, then it will continue to add to the value that you bring to the table for your entire working life. If you spend a lot of time learning a skill that will not be of great use in five or ten years, then the time you invested won't be providing a return to you after that point.

That does not mean that everything in the computer security world has a short shelf life. Things like how public/private key systems work or understanding what a man-in-the-middle attack is remain applicable for the long haul.

But a lot of security knowledge involves understanding flaws in very specific systems, and those flaws will go away or become less relevant over time. Low-level security, dealing as it does with the implementation characteristics of specific systems, is often an example of such a thing.

The world does need low-level computer security experts.

But I would suggest that anyone interested in a career in computer security, when studying things, keep in mind the likely longevity of what they are teaching themselves and ask whether that knowledge will still be relevant at the time they expect to retire. Everyone needs to learn some short-shelf-life material. But if one specializes in only short-shelf-life things, then one will need to keep committing time to learning new short-shelf-life material down the line as the current knowledge loses value. I'd try to keep a mix, where a substantial portion of what I'm learning will have a long shelf life, and the short-shelf-life stuff is learned with the understanding that I'm going to need to replace it at some point.

I've spent time hand-assembling 680x0 and x86 code, and have written exploits dependent upon quirks in particular compilers and upon long-dead binary environments. A lot of that isn't terribly useful knowledge in 2024. That's okay -- I've got other things I know that are useful. But if you go down this road, I would be careful to also allocate time to things that you can say, with a higher degree of confidence, will be relevant twenty, thirty, and forty years down the line.

[-] rollmagma@lemmy.world 6 points 4 months ago

Words of wisdom right here.

Personally, what bothers me about the security field is how quickly it becomes counterproductive, either by forcing people to keep working on time-consuming processes like certifications or mitigation work (e.g., see the state of CVEs in the Linux kernel), or simply by pumping out more and more engineers who have never put together a working solution in their lives. Building anything of value is already hard enough as it is nowadays.

[-] LainTrain@lemmy.dbzer0.com 3 points 4 months ago* (last edited 4 months ago)

Yeah, I fully agree with the former in particular. The GRC side of things is just not that interesting and not that valuable. I do vulnerability management as a job, which is somewhat depressing after a cybersec MSc, but honestly a job is a job. I don't really believe it produces much value for anyone, but neither does the entire company I work for; the entire system we live under incentivizes waste, and who am I to argue as long as I get paid?

When it comes to low-level stuff, this is purely curiosity and self-fulfillment of understanding for me.

[-] litchralee@sh.itjust.works 5 points 4 months ago* (last edited 4 months ago)

A commenter has already provided a fairly comprehensive description of low-level computer security positions. But I also want to note that a firm foundation in low-level implementations is useful for designing embedded software and firmware.

As in, writing or deploying against custom BIOS/UEFI images, or for real-time devices where timing is of the essence. Most anyone dealing with an RTOS or kernel drivers or protocol buses will necessarily require an understanding of both the hardware architecture plus the programming language available to them. And if that appeals to you, you might consider looking into embedded software development.

The field spans anything from writing the control loop for washing machines, to managing data exchange between multiple video co-processors onboard a flying drone to identify and avoid collisions, to negotiating the protocol to set up a 400 Gbps optical transceiver to shoot a laser down 40 km of fibre.

If something "thinks" but doesn't have a monitor and keyboard, it's likely to have one or more processors running embedded software. Look around the room you're in and see what this field has enabled.

[-] astrsk@kbin.run 4 points 4 months ago* (last edited 4 months ago)

Since you are interested in practical examples, I would recommend you watch and maybe even follow along with Ben Eater’s 6502 breadboard computer series on YouTube (piped link). The kit is cheap and works great but more importantly it introduces so many core concepts about how computers actually work from a raw metal and machine code standpoint while touching on so many different aspects about computers that still apply today.

[-] LainTrain@lemmy.dbzer0.com 2 points 4 months ago

Yeah, I'm a big fan of this actually! I remember putting together a half adder with some breadboards. I never got to the full computer because I didn't have the drive to do so and felt like I understood the concepts well enough, but yes, this is awesome!

[-] paw@feddit.org 3 points 4 months ago

That's what they all say 😉

Jokes aside: have fun.

[-] LainTrain@lemmy.dbzer0.com 3 points 4 months ago* (last edited 4 months ago)

This resource was 100% exactly what I was looking for. Now gonna set up an env and play with the examples! Thanks so much!

[-] catch22@programming.dev 15 points 4 months ago* (last edited 4 months ago)

I really like this video; in it he demonstrates how a char buffer overflow can be exploited to alter the return address on the stack, and he walks through an example of how it's done. https://www.youtube.com/watch?v=1S0aBV-Waeo

[-] LainTrain@lemmy.dbzer0.com 1 points 4 months ago

Love computerphile!

[-] tal@lemmy.today 14 points 4 months ago* (last edited 4 months ago)

There are various approaches, but the most common one in an x86 environment is overwriting the return address that was pushed onto the stack.

When you call a function, the compiler generally maps it to a CALL instruction at the machine language level.

At the time that a CALL instruction is invoked, the current instruction pointer gets pushed onto the stack.

kagis

https://www.felixcloutier.com/x86/call

When executing a near call, the processor pushes the value of the EIP register (which contains the offset of the instruction following the CALL instruction) on the stack (for use later as a return-instruction pointer). The processor then branches to the address in the current code segment specified by the target operand.

A function-local, fixed-length array will also live on the stack. If it's possible to induce the code in the called function to overflow such an array, it can overwrite that instruction pointer saved on the stack. When the function returns, it hits a RET instruction, which will pop that saved instruction pointer off the stack and jump to it:

https://www.felixcloutier.com/x86/ret

Transfers program control to a return address located on the top of the stack. The address is usually placed on the stack by a CALL instruction, and the return is made to the instruction that follows the CALL instruction.

If what overwrote the saved instruction pointer on the stack was a pointer to malicious code, that code will now be executing.
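To make that concrete, a minimal sketch of the vulnerable pattern (the function name and buffer size here are made up for illustration; gets() was removed from C11 precisely because it's unsafe, and a modern compiler will add stack protectors unless told otherwise):

```c
#include <stdio.h>

void greet(void) {
    char name[64];      /* function-local, fixed-length array on the stack */

    gets(name);         /* no length limit: input past 64 bytes keeps
                           writing up the stack, over the saved frame
                           pointer and then over the saved return
                           address pushed by the CALL that got us here */
    printf("Hello, %s\n", name);
}                       /* RET pops whatever now occupies the saved
                           instruction pointer slot and jumps to it */

int main(void) {
    greet();
    return 0;
}
```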

If you're wanting to poke at this, I'd suggest first familiarizing yourself with a low-level debugger so that you can actually see what's happening; doing this blindly from source, without being able to watch things at the instruction level and inspect the stack, is going to be a pain. On Linux, probably gdb. On Windows I'm long out of date, but SoftICE was a popular low-level debugger the last time I was touching Windows.

You'll want to be able to at least set breakpoints, disassemble code around a given instruction to show the relevant machine language, display memory at a given address in various formats, and single step at the machine language level.

I'd also suggest familiarizing yourself with the calling convention for your particular environment, which is what happens at a machine language level surrounding the call to and return from a subroutine. Such buffer overflow attacks also involve overwriting other data on the stack, and understanding what is being overwritten is necessary to understand such an attack.
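A quick way to see that layout for yourself (a sketch; __builtin_return_address is a gcc/clang extension, and on 64-bit targets arguments arrive in registers, so the picture differs from classic 32-bit cdecl):

```c
#include <stdio.h>

void frame_demo(int arg) {
    char buf[8];
    /* Printing addresses makes the frame visible: the saved frame
       pointer and the saved return address live in the gap between
       buf and the caller's side of the frame. */
    printf("buf      : %p\n", (void *)buf);
    printf("arg      : %p\n", (void *)&arg);
    printf("ret addr : %p\n", __builtin_return_address(0));
}

int main(void) {
    frame_demo(42);
    return 0;
}
```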

[-] LainTrain@lemmy.dbzer0.com 1 points 4 months ago

Thanks! This is very helpful

[-] mox@lemmy.sdf.org 4 points 4 months ago

I think you might find some illustrations & examples of what you want by searching for return-oriented programming, rather than just buffer overflows.

[-] sharky5740@techhub.social 2 points 4 months ago* (last edited 4 months ago)

@LainTrain There used to be approximately a million examples floating around on the web. You could just write a simple program with a fixed-size stack buffer at a repeatable address, overflow the return address with a crafted string, and return to the overwritten stack buffer full of shellcode. All of the mitigations (stack canaries, W^X, ASLR, CFI, canonical addresses, ...) mean that you either have to use much more elaborate techniques (ROP/return-to-libc, address leaks, ...) or you have to disable the mitigations to see a working exploit example, which is pretty unimpressive.

[-] LainTrain@lemmy.dbzer0.com 3 points 4 months ago

Thanks! The reason I was looking for an example is because I understand:

overflow a return address with a crafted string, return to the overwritten stack buffer full of shellcode

In principle, but not in practice. Especially the last part.

I have my char buf[16] and some char *ptr = buf;, and then a gets() reads a 20-char string, causing a buffer overflow either right then or later, when the buffer is read out of bounds.

I've done this many times, sometimes intentionally. If I visualize the memory as one continuous line, with ptr stored at the precise address where buf[20] would land, then part of the string given to gets() can overwrite ptr with a new address. The next time that pointer is accessed, the program performs an arbitrary memory read, and the arbitrary address can point still further into the initial string we gave to gets() -- e.g., to buf[40], where our shellcode is stored. But how to actually implement this in practice (so, in code), I don't really know.
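In code, my mental model is roughly this shape (just a sketch; in reality the compiler decides the layout and may not put ptr where I imagine):

```c
#include <stdio.h>

int main(void) {
    char buf[16];
    char *ptr = buf;    /* imagined to sit right after buf in memory */

    gets(buf);          /* a 20+ char line overflows buf...            */
    puts(ptr);          /* ...and if the overflow reached ptr, this now
                           reads from whatever address the input wrote */
    return 0;
}
```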

Specifically, I don't know how to make a pointer live at a predictable, constant address so that its stored value can be overwritten, nor how to make the use of the resulting maliciously modified pointer actually execute code. I'm guessing it can't just be a char pointer reading in data, right?

[-] tal@lemmy.today 3 points 4 months ago* (last edited 4 months ago)

Specifically, I don't know how to make a pointer live at a predictable, constant address so that its stored value can be overwritten,

This is one of the things I mentioned in my comment above about mitigating buffer overflow attacks: address randomization is one of those mitigations. Are you trying to create an exploit that will function in such an environment?

If so, I'd still start out understanding how the attack works in a non-mitigated environment -- it's simpler -- and then learn about the mitigations and efforts to counter them.

[-] LainTrain@lemmy.dbzer0.com 1 points 4 months ago

I agree. I think for now I'd like to create a demo exploit and an exploitable program without considering ASLR et al., and then at some point in the future look at a return-to-libc type deal to understand that as well.

[-] sharky5740@techhub.social 0 points 4 months ago

@LainTrain The simplest case is overwriting the return address on the stack. If your stack layout looks like this (B for buffer, R for return address, A for function arguments):
BBBBBBBBRRRRAAAA
and you give a pointer to the first B byte to gets(), the input can overwrite the bytes of R.
You can try this with a 32-bit program compiled with mitigations disabled. Run the program in a debugger, break in the function, and inspect the stack pointer value. With ASLR disabled, the addresses will remain the same for every program execution, assuming the call graph at this point doesn't change. You can then overwrite the bytes of R with the buffer address (assuming no stack canary) and fill the buffer bytes with machine code instructions. When the function attempts to return, it instead jumps to the instructions you left in the buffer and executes them (assuming no W^X).
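A sketch of a matching victim program (hypothetical; compile with mitigations off, e.g. gcc -m32 -fno-stack-protector -z execstack -no-pie, and run with ASLR disabled, e.g. under setarch -R):

```c
#include <stdio.h>

void vuln(void) {
    char buf[8];    /* the B bytes; in a real cdecl frame a saved EBP
                       usually sits between these and the R bytes    */
    gets(buf);      /* anything past 8 bytes marches toward R        */
}

int main(void) {
    vuln();         /* if R was overwritten, the RET inside vuln()
                       jumps wherever the input said -- e.g. back
                       into the shellcode left in buf                */
    puts("returned normally");
    return 0;
}
```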

[-] LainTrain@lemmy.dbzer0.com 1 points 4 months ago

Thank you! This is incredibly helpful and insightful.

I now understand how one would do this manually in a debugger. Am I correct in thinking that if I constructed the input to gets() such that BBBBBBBB contains shellcode, and RRRR is a return address pointing to the beginning of the buffer, then that is how arbitrary code execution is achieved in practice?
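i.e., building the input with something like this (a sketch with made-up sizes and a made-up address; the real offsets and the buffer address come from the debugger):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Hypothetical layout: 8 buffer bytes + 4 saved-EBP bytes to
       fill, then 4 bytes that land on the saved return address. */
    unsigned char payload[16];
    unsigned int buf_addr = 0xffffd010;     /* as observed in gdb  */

    memset(payload, 0x90, 12);              /* NOPs standing in for
                                               real shellcode      */
    memcpy(payload + 12, &buf_addr, 4);     /* overwrites saved EIP
                                               (x86 little-endian) */
    fwrite(payload, 1, sizeof payload, stdout);
    putchar('\n');                          /* gets() stops at \n  */
    return 0;
}
```

and then feeding it in with something like ./payload | ./vuln.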

[-] sharky5740@techhub.social 1 points 4 months ago* (last edited 4 months ago)

@LainTrain Yes, but "in practice" this simple approach worked 20 years ago. Modern processors, compilers and operating systems make exploitation of stack buffer overflows a lot more difficult.

[-] LainTrain@lemmy.dbzer0.com 1 points 4 months ago

That's fine. I think for my purposes it's better to start simple, with the basic concept first, then add complexity by learning about the protections and how they have been (or could be) circumvented.

[-] j4k3@lemmy.world 1 points 4 months ago

IIRC, the Nintendo Game & Watch hack on YouTube covers this with an STM32H7, on the little Mario handheld game from a few years ago.
