## ~x + ~y == ~(x + y) is always false?

117 votes

Does this code always evaluate to false? Both variables are two's complement signed ints.

``````~x + ~y == ~(x + y)
``````

I feel like there should be some number that satisfies the conditions. I tried testing the numbers between `-5000` and `5000` but never achieved equality. Is there a way to set up an equation to find the solutions to the condition?

Will swapping one for the other cause an insidious bug in my program?

Assume for the sake of contradiction that there exists some `x` and some `y` (mod 2^n) such that

``````~(x+y) == ~x + ~y
``````

By two's complement*, we know that,

``````      -x == ~x + 1
<==>  -1 == ~x + x
``````

Noting this result, we have,

``````      ~(x+y) == ~x + ~y
<==>  ~(x+y) + (x+y) == ~x + ~y + (x+y)
<==>  ~(x+y) + (x+y) == (~x + x) + (~y + y)
<==>  ~(x+y) + (x+y) == -1 + -1
<==>  ~(x+y) + (x+y) == -2
<==>  -1 == -2
``````

Hence, a contradiction. Therefore, `~(x+y) != ~x + ~y` for all `x` and `y` (mod 2^n).

*It is interesting to note that on a machine with one's complement arithmetic, the equality actually holds true for all `x` and `y`. This is because under one's complement, `~x = -x`. Thus, `~x + ~y == -x + -y == -(x+y) == ~(x+y)`.

## fork() branches more than expected?

94 votes

Consider the following piece of code:

``````int main()
{
    int i;
    for (i = 0; i < 2; i++)
    {
        fork();
        printf(".");
    }
}
``````

This program outputs 8 dots. How is that possible? Shouldn't there be 6 dots instead?

The `fork()` primitive often stretches the imagination. Until you get a feel for it, you should trace out on paper what each operation is and account for the number of processes. Don't forget that fork() creates a near-perfect copy of the current process. The most significant difference (for most purposes) is that `fork()`'s return value differs between parent and child. (Since this code ignores the return value, it makes no difference.)

So, at first, there is one process. That creates a second process, and both print a dot and loop. On their second iteration, each creates another copy, so there are four processes, each of which prints a dot and then exits. We can easily account for six dots, just as you expect.

However, what `printf()` really does is buffer its output. So the first dot, written when there were only two processes, does not appear immediately. It remains in the buffer, which is duplicated by `fork()`. It is not until a process is about to exit that its buffered dot appears. Four processes, each printing a buffered dot plus a fresh one, gives 8 dots.

If you wanted to avoid that behavior, call `fflush(stdout);` after `printf()`.

## Benefits of pure function

51 votes

Today I was reading about pure functions and got confused about their use:

A function is said to be pure if it returns same set of values for same set of inputs and does not have any observable side effects.

e.g. `strlen()` is a pure function while `rand()` is an impure one.

``````__attribute__ ((pure)) int fun(int i)
{
    return i*i;
}

int main()
{
    int i=10;
    printf("%d",fun(i)); // outputs 100
    return 0;
}
``````

http://ideone.com/33XJU

The above program behaves in the same way as it does without the `pure` declaration.

What are the benefits of declaring a function as `pure` [if there is no change in output]?

`pure` lets the compiler know that it can make certain optimisations about the function: imagine a bit of code like

``````for (int i = 0; i < 1000; i++)
{
    printf("%d", fun(10));
}
``````

With a pure function, the compiler can know that it needs to evaluate `fun(10)` once and once only, rather than 1000 times. For a complex function, that's a big win.

## Free memory allocated in a different function?

45 votes

I'm trying to learn C and I'm currently trying to write a basic stack data structure, but I can't seem to get basic `malloc`/`free` right.

Here's the code I've been using. (I'm just posting a small part here to illustrate a specific problem, not the whole program; the error message below was generated by running just this example code under `valgrind`.)

``````#include <stdio.h>
#include <stdlib.h>

typedef struct Entry {
    struct Entry *previous;
    int value;
} Entry;

void destroyEntry(Entry entry);

int main(int argc, char *argv[])
{
    Entry* apple;
    apple = malloc(sizeof(Entry));
    destroyEntry(*(apple));
    return 0;
}

void destroyEntry(Entry entry)
{
    Entry *entry_ptr = &entry;
    free(entry_ptr);
    return;
}
``````

When I run it through `valgrind` with `--leak-check=full --track-origins=yes`, I get the following error:

``````==20674== Invalid free() / delete / delete[] / realloc()
==20674==    at 0x4028E58: free (vg_replace_malloc.c:427)
==20674==    by 0x80485B2: destroyEntry (testing.c:53)
==20674==    by 0x8048477: main (testing.c:26)
==20674==  Address 0xbecc0070 is on thread 1's stack
``````

I think this error means that the `destroyEntry` function is not allowed to modify memory allocated explicitly in main. Is that right? Why can't I just `free` the memory I allocated in `main` in another function? (and is this behavior somehow specific to main?)

Whenever you pass a parameter to a function, a copy is made, and the function works on that copy. So in your case, you are trying to `free` a copy of the original object, which doesn't make any sense.

You should modify your function to take a pointer, and then you can have it call `free` directly on that pointer.

## Including header files in C/C++ just once

37 votes

Is it ever useful to include a header file more than once in C or C++?

If the mechanism is never used, why would the compiler ever worry about including a file twice; if it really were useless, wouldn't it be more convenient if newer compilers made sure every header is included only once?

Edit:

I understand that there are standard ways of doing things like include guards and pragma once, but why should you have to specify even that? Shouldn't it be the default behavior of the compiler to include files only once?

Yes, it's useful when generating code with the preprocessor, or doing tricks like Boost.PP does.

For an example, see X Macros. The basic idea is that the file contains the body of the macro and you `#define` the arguments and then `#include` it. Here's a contrived example:

macro.xpp

``````std::cout << MESSAGE;
#undef MESSAGE
``````

file.cpp:

``````int main() {
# define MESSAGE "hello world"
# include "macro.xpp"
}
``````

This also allows you to use `#if` and friends on the arguments, something that normal macros can't do.

## Binary, Floats, and Modern Computers

26 votes

I have been reading a lot about floats and computer-processed floating-point operations. The biggest question I see when reading about them is why are they so inaccurate? I understand this is because binary cannot accurately represent all real numbers, so the numbers are rounded to the 'best' approximation.

My question is, knowing this, why do we still use binary as the base for computer operations? Surely using a larger base number than 2 would increase the accuracy of floating-point operations exponentially, would it not?

What are the advantages of using a binary number system for computers as opposed to another base, and has another base ever been tried? Or is it even possible?

Computers are built on transistors, which have a "switched on" state, and a "switched off" state. This corresponds to high and low voltage. Pretty much all digital integrated circuits work in this binary fashion.

Ignoring the fact that transistors just simply work this way, using a different base (e.g. base 3) would require these circuits to operate at an intermediate voltage state (or several) as well as 0V and their highest operating voltage. This is more complicated, and can result in problems at high frequencies - how can you tell whether a signal is just transitioning between 2V and 0V, or actually at 1V?

When we get down to the floating point level, we are (as nhahtdh mentioned in their answer) mapping an infinite space of numbers down to a finite storage space. It's an absolute guarantee that we'll lose some precision. One advantage of IEEE floats, though, is that the precision is relative to the magnitude of the value.

Update: You should also check out Tunguska, a ternary computer emulator. It uses base-3 instead of base-2, which makes for some interesting (albeit mind-bending) concepts.

## Is const a lie? (since const can be cast away)

22 votes

Possible Duplicate:
Sell me on const correctness

What is the usefulness of the keyword `const` in C or C++, since it allows such a thing?

``````void const_is_a_lie(const int* n)
{
    *((int*) n) = 0;
}

int main()
{
    int n = 1;
    const_is_a_lie(&n);
    printf("%d", n);
    return 0;
}
``````

Output: 0

It is clear that `const` cannot guarantee the non-modifiability of the argument.

`const` is a promise you make to the compiler, not something it guarantees you.

Because of the `const`, the compiler is allowed to assume that the value won't change, and therefore it can skip rereading it, if that would make the program faster.

In this case, since `const_is_a_lie()` violates its contract, weird things happen. Don't violate the contract. And be glad that the compiler gives you help keeping the contract. Casts are evil.

## Call a function before main

16 votes

Possible Duplicate:
Is main() really start of a C++ program?

Is it possible to call my function before the program's startup? How can I do this in C++ or C?

You can have a global variable or a `static` class member.

1) `static` class member

``````//BeforeMain.h
bool foo(); // the function to run before main(), defined elsewhere

class BeforeMain
{
    static bool foo;
};

//BeforeMain.cpp
#include "BeforeMain.h"
bool BeforeMain::foo = ::foo(); // qualified, so the global foo() is called
``````

2) global variable

``````bool b = foo();
int main()
{
}
``````

Note this link - http://www.parashift.com/c++-faq-lite/ctors.html#faq-10.14 - posted by Lundin.

## strange C integer inequality comparison result

15 votes
``````#include <limits.h>
#include <stdio.h>
int main() {
    long ival = 0;
    printf("ival: %li, min: %i, max: %i, too big: %i, too small: %i\n",
           ival, INT_MIN, INT_MAX, ival > INT_MAX, ival < INT_MIN);
}
``````

This gives the output:

``````ival: 0, min: -2147483648, max: 2147483647, too big: 0, too small: 1
``````

How is that possible?

(I actually got hit by this problem/bug in CPython 2.7.3 in `getargs.c`:`convertsimple`. If you look up the code, in `case 'i'`, there is the check `ival < INT_MIN` which was always true for me. See also the test case source with further references.)

Well, I have now tested a few different compilers. GCC and Clang, compiled for x86, all return the expected (too small: 0). The unexpected output comes from the Clang in the Xcode toolchain when compiling for armv7.

If you want to reproduce:

This is the exact compile command: `/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -arch armv7 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk test-int.c`

This is Xcode 4.3.2.

I copied the resulting `a.out` over to my iPhone and executed it.

If anyone is interested in the assembler code generated by this:

``````    .section    __TEXT,__text,regular,pure_instructions
.section    __TEXT,__textcoal_nt,coalesced,pure_instructions
.section    __TEXT,__const_coal,coalesced
.section    __TEXT,__picsymbolstub4,symbol_stubs,none,16
.section    __TEXT,__StaticInit,regular,pure_instructions
.syntax unified
.section    __TEXT,__text,regular,pure_instructions
.globl  _main
.align  2
.code   16
.thumb_func _main
_main:
push    {r7, lr}
mov r7, sp
sub sp, #20
movw    r0, #65535
movt    r0, #32767
movs    r1, #0
movt    r1, #0
str r1, [sp, #16]
str r1, [sp, #12]
ldr r1, [sp, #12]
ldr r2, [sp, #12]
cmp r2, r0
movw    r0, #0
it  gt
movgt   r0, #1
and r0, r0, #1
ldr r2, [sp, #12]
cmn.w   r2, #-2147483648
movw    r2, #0
it  lt
movlt   r2, #1
and r2, r2, #1
mov r3, sp
str r2, [r3, #4]
str r0, [r3]
mov.w   r2, #-2147483648
mvn r3, #-2147483648
movw    r0, :lower16:(L_.str-(LPC0_0+4))
movt    r0, :upper16:(L_.str-(LPC0_0+4))
LPC0_0:
add r0, pc
blx _printf
ldr r1, [sp, #16]
str r0, [sp, #8]
mov r0, r1
add sp, #20
pop {r7, pc}

.section    __TEXT,__cstring,cstring_literals
L_.str:
.asciz   "ival: %li, min: %i, max: %i, too big: %i, too small: %i\n"

.subsections_via_symbols
``````

This is a compiler bug. There is no room in the C standard for `too small` to be anything other than 0. Here's how it works:

1. Since `INT_MIN` is an `int`, it gets converted to `long` during the "usual arithmetic conversions". This happens because `long` has higher rank than `int` (and both are signed types). No promotions occur, since all of the operands already have at least `int` rank. No undefined or implementation-defined behavior is invoked.

2. During the conversion, the value of `INT_MIN` is preserved. Since it is being converted from `int` to `long`, and `long` is guaranteed to have at least the range of `int`, the value of `INT_MIN` must be preserved. No undefined or implementation-defined behavior is invoked. No modular conversions are permitted; those apply to unsigned types only.

3. The result of the comparison should be `0`.

There is no wiggle room for sign extension or other such things. Also, since the call to `printf` is correct, there is no problem there.

If you can reproduce it on another system, or send it to someone else who can reproduce it, you should report the bug directly to your toolchain vendor.

Attempts to reproduce the bug: I was not able to reproduce the behavior on any of the following combinations, all both with optimization on and off:

• GCC 4.0, PPC + PPC64
• GCC 4.2, PPC + PPC64
• GCC 4.3, x64
• GCC 4.4, x64
• Clang 3.0, x64

## printing float, preserving precision

14 votes

I am writing a program that prints floating point literals to be used inside another program.

How many digits do I need to print in order to preserve the precision of the original float?

Since a float has `24 * (log(2) / log(10)) = 7.2247199` decimal digits of precision, my initial thought was that printing 8 digits should be enough. But if I'm unlucky, those `0.2247199` get distributed to the left and to the right of the 7 significant digits, so I should probably print 9 decimal digits.

Is my analysis correct? Is 9 decimal digits enough for all cases? Like `printf("%.9g", x);`?

Is there a standard function that converts a float to a string with the minimum number of decimal digits required for that value, in the cases where 7 or 8 are enough, so I don't print unnecessary digits?

Note: I cannot use hexadecimal floating point literals, because standard C++ does not support them.

In order to guarantee that a binary->decimal->binary roundtrip recovers the original binary value, IEEE 754 requires

``````
The original binary value will be preserved by converting to decimal and back again using:

5 decimal digits for binary16
9 decimal digits for binary32
17 decimal digits for binary64
36 decimal digits for binary128

For other binary formats the required number of decimal digits is

1 + ceiling(p*log10(2))

where p is the number of significant bits in the binary format, e.g. 24 bits for binary32.
``````

In C, the functions you can use for these conversions are snprintf() and strtof/strtod/strtold().

Of course, in some cases even more digits can be useful (no, they are not always "noise", depending on the implementation of the decimal conversion routines such as snprintf() ). Consider e.g. printing dyadic fractions.

## In C, when is conditional "test ? : alt" form (empty true case) supported?

13 votes

In gcc, I can write `foo ? : bar` which is a shorthand form of `foo ? foo : bar` but I see that K&R doesn't mention it.

Is this something I should rely on, defined in some standard? Or just an (evil) gcc extension I should avoid?

This is a GCC extension, documented under the name Conditionals with Omitted Operands. It is not standard C; compiling with the `-pedantic` flag will tell you so.

The middle operand in a conditional expression may be omitted. Then if the first operand is nonzero, its value is the value of the conditional expression.

Therefore, the expression

`    x ? : y`

has the value of x if that is nonzero; otherwise, the value of y.

This example is perfectly equivalent to

`    x ? x : y`

In this simple case, the ability to omit the middle operand is not especially useful. When it becomes useful is when the first operand does, or may (if it is a macro argument), contain a side effect. Then repeating the operand in the middle would perform the side effect twice. Omitting the middle operand uses the value already computed without the undesirable effects of recomputing it.

Is this something I should rely on, defined in some standard? Or just an (evil) gcc extension I should avoid?

It depends on your requirements. If your code doesn't need to build on any compiler other than GCC, then you can use it. However, if your code must build across different compiler implementations, then you should not use it.

Anyhow, one should aim to write intuitive and readable code, so I would always suggest avoiding such (ugly) constructs.

## The named loop idiom : dangerous?

13 votes

I've read an article about the "Named Loop Idiom" in C++ : http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Named_Loop

This idiom allows us to write things like this:

``````named(outer)
for(int i = 0 ; i < rows ; ++i) {

named(inner)
for(int j = 0 ; j < cols ; ++j) {

if(some_condition)
break(outer);   // exit the 'outer' loop

}
}
``````

Such constructs already exists as core feature in many languages, like Java for instance.

According to the article, it can be implemented in C++ by defining two evil macros :

``````#define named(blockname) goto blockname; \
blockname##_skip: if (0) \
blockname:

#define break(blockname) goto blockname##_skip;
``````

I know that many people would like to banish the use of `goto`. I have personally found it helpful in very rare cases, especially when I wanted to `break` out of a bunch of nested loops. This idiom appears to me as a cleaner solution for that, but is it OK to use it in real code?

On the discussion page of the article, one can read :

"Do not do this. You'll end up in hell"

So my questions are: What are the drawbacks of using the named loop idiom? Is it dangerous? If yes, why?

Bonus question: is it possible to implement a named `continue` similarly? (I think it's not possible using the `named(...) for(...;...;...) {}` syntax, but who knows?)

EDIT : I agree with you, redefining a keyword is nasty. What about using `#define breakLoop()` instead?

As covered in the comments, #defining `break` is problematic. Let's assume you use something else.

I'd still argue that this is dangerous. It's an extremely unusual idiom (to C++ programmers), so they're less likely to understand, and thus they might make breaking changes. Given that there are less-surprising--and therefore less-dangerous--ways to accomplish the same thing, I would advise against it.

Consider putting the loops in a function or a lambda. Then you can `return` to break out of the outer loop. As a benefit, you can return information about the premature exit, which may be useful to the outer code.

## gcc removes inline assembler code

13 votes

It seems like gcc 4.6.2 removes code it considers unused from functions.

## test.c

``````int main(void) {
    goto exit;
handler:
    __asm__ __volatile__("jmp 0x0");
exit:
    return 0;
}
``````

## Disassembly of `main()`

``````   0x08048404 <+0>:     push   ebp
0x08048405 <+1>:     mov    ebp,esp
0x08048407 <+3>:     nop    # <-- This is all that's left of my jmp.
0x08048408 <+4>:     mov    eax,0x0
0x0804840d <+9>:     pop    ebp
0x0804840e <+10>:    ret
``````

## Compiler options

No optimizations enabled, just `gcc -m32 -o test test.c` (`-m32` because I'm on a 64 bit machine).

How can I stop this behavior?

Edit: preferably by using compiler options, not by modifying the code.

Update 2012/6/18

Just thinking about it, one can put the `goto exit` in an asm block, which means that only 1 line of code needs to change:

``````int main(void) {
    __asm__ ("jmp exit");

handler:
    __asm__ __volatile__("jmp 0x0");
exit:
    return 0;
}
``````

That is significantly cleaner than my other solution below (and possibly nicer than @ugoren's current one too).

This is pretty hacky, but it seems to work: hide the handler inside a conditional that can never be taken under normal conditions, and keep it from being eliminated by using a scrap of inline assembler to stop the compiler from analyzing the value.

``````int main (void) {
    int x = 0;
    __asm__ __volatile__ ("" : "=r"(x));
    // compiler can't tell what the value of x is now, but it's always 0

    if (x) {
handler:
        __asm__ __volatile__ ("jmp 0x0");
    }

    return 0;
}
``````

Even with `-O3` the `jmp` is preserved:

``````    testl   %eax, %eax
je      .L2
.L3:
jmp 0x0
.L2:
xorl    %eax, %eax
ret
``````

(This seems really dodgy, so I hope there is a better way to do it. Edit: simply declaring `x` as `volatile` also works, so the inline asm trickery isn't needed.)

## Adding two numbers without using +

12 votes

I have this code which does the trick.

``````#include <stdio.h>
int main()
{
    int a = 30000, b = 20, sum;
    char *p;
    p = (char *)a;
    sum = (int)&p[b]; // adding a & b
    printf("%d", sum);
    return 0;
}
``````

Can someone please explain what is happening in the code?

``````p = (char *)a;
sum = (int)&p[b]; // adding a & b
``````

I think it is worth adding to the other answers a quick explanation of pointers, arrays and memory locations in C.

Firstly, arrays in C are just a block of memory big enough to hold the number of items in the array (see http://www.cplusplus.com/doc/tutorial/arrays/).

so if we said

``````int example[5];
example[0] = 1;
example[1] = 2;
example[2] = 3;
example[3] = 4;
example[4] = 5;
``````

Assuming int is 32 bits, we would have a block of memory 5 * 32 bits = 160 bits long. As C is a low-level language, it tries to be as efficient as possible and therefore stores the least information about arrays that it can; in this case that is just the memory address of the first element. So the type of example could be expressed as

``````int *example;
``````

Or: example points to an int. To get at the items in the array, you add the correct offset to the address stored in example and read the value at that memory address. If we assume memory looks like

``````Memory Address = Value (ints take up 4 bytes of space)
1000 = 1          <-- example
1004 = 2
1008 = 3
1012 = 4
1016 = 5
``````

So

``````int i = example[3];  //The 4th element
``````

could be expressed as

``````int i = *(example + 3 * sizeof(int));
int i = *(example + 3 * 4);
int i = *(1000 + 12);
int i = *(1012); // Fetch the value at memory location 1012
int i = 4;
``````

The sizeof(int) is 4 (an int is 32 bits, or four 8-bit bytes). If you were trying to do addition, you would want a `char`, which is 1 byte, so that indexing adds exactly the index.

So, back to your code:

``````char *p;       // declare p as a pointer to a char
p = (char *)a; // point p at memory location 30000
// p[b] would be the 21st element of the "array" p =>
// p[20] =>
// p + 20 * sizeof(char) =>
// p + 20 * 1 =>
// p + 20 =>
// 30000 + 20 =>
// 30020
// the & operator in C takes the address of the variable, so
sum = (int) &p[b];
// &p[b] => the address of p[b] => 30020
// (int) casts this pointer to an int.
``````

So sum is assigned the address of the 21st element of the "array": 30000 + 20 = 30020, which is exactly `a + b`.

Long winded explanation.

## Watch a memory range in gdb?

8 votes

I am debugging a program in gdb and I want the program to stop when the memory region 0x08049000 to 0x0804a000 is accessed. When I try to set memory breakpoints manually, gdb does not seem to support more than two locations at a time.

``````(gdb) awatch *0x08049000
Hardware access (read/write) watchpoint 1: *0x08049000
(gdb) awatch *0x08049001
Hardware access (read/write) watchpoint 2: *0x08049001
(gdb) awatch *0x08049002
Hardware access (read/write) watchpoint 3: *0x08049002
(gdb) run
Starting program: /home/iblue/git/some-code/some-executable
Warning:
Could not insert hardware watchpoint 3.
Could not insert hardware breakpoints:
You may have requested too many hardware breakpoints/watchpoints.
``````

There is already a question where this has been asked, and the answer was that it may be possible to do this with valgrind. Unfortunately the answer does not contain any examples or references to the valgrind manual, so it was not very enlightening: How can gdb be used to watch for any changes in an entire region of memory?

So: How can I watch the whole memory region?

If you use GDB 7.4 together with Valgrind 3.7.0, then you have unlimited "emulated" hardware watchpoints.

Start your program under Valgrind, giving the arguments `--vgdb=full --vgdb-error=0` then use GDB to connect to it (`target remote | vgdb`). Then you can e.g. `watch` or `awatch` or `rwatch` a memory range by doing `rwatch (char[100]) *0x5180040`

See the Valgrind user manual on gdb integration for more details

## Is kernel/sched.c/context_switch() guaranteed to be invoked every time a process is switched in?

8 votes

I want to alter the Linux kernel so that every time the current PID changes - i.e., a new process is switched in - some diagnostic code is executed (detailed explanation below, if curious). I did some digging around, and it seems that every time the scheduler chooses a new process, the function `context_switch()` is called, which makes sense (this is just from a cursory analysis of `sched.c/schedule()` ).

The problem is, the Linux scheduler is basically black magic to me right now, so I'd like to know if that assumption is correct. Is it guaranteed that, every time a new process is selected to get some time on the CPU, the context_switch() function is called? Or are there other places in the kernel source where scheduling could be handled in other situations? (Or am I totally misunderstanding all this?)

To give some context, I'm working with the MARSS x86 simulator trying to do some instrumentation and measurement of certain programs. The problem is that my instrumentation needs to know which executing process certain code events correspond to, in order to avoid misinterpreting the data. The idea is to use some built-in message passing systems in MARSS to pass the PID of the new process on every context switch, so it always knows what PID is currently in execution. If anyone can think of a simpler way to accomplish that, that would also be greatly appreciated.

Yes, you are correct.

`schedule()` will call `context_switch()`, which is responsible for switching from one task to another once the new process has been selected by `schedule()`.

`context_switch()` basically does two things. It calls `switch_mm()` and `switch_to()`.

`switch_mm()` - switch to the virtual memory mapping for the new process

`switch_to()` - switch the processor state from the previous process to the new process (save/restore registers, stack info and other architecture specific things)

As for your approach, I guess it's fine. It's important to keep things nice and clean when working with the kernel, and try to keep it relatively easy until you gain more knowledge.

## In C, tan(30) gives me a negative value! Why?

8 votes

I observe that my `tan(float)` function from the `cmath` library is returning a negative value.

The following piece of code, when run :

``````    #include <cmath>
....

// some calculation here gives me a value between 0.0 to 1.0.
float tempSpeed = 0.5;

float tanValue = tan(tempSpeed * 60);

__android_log_print(ANDROID_LOG_INFO, "Log Me", "speed: %f", tanValue);
``````

Gives me this result in my Log file:

``````Log Me: speed -6.4053311966
``````

As far as I remember

``````tan(0.5*60) = tan(30) = 1/sqrt(3);
``````

Can someone help me here as in why I am seeing a negative value? Is it related to some floating point size error? Or am I doing something really dumb?

In C, `tan` and other trigonometric functions expect radians as their arguments, not degrees. You can convert degrees to radians:

``````tan( 30. * M_PI / 180. ) == 0.57735026918962576450914878050196
``````

## Learning C for Objective-C

7 votes

I am relatively proficient in Objective-C but I have been looking around some frameworks and libraries I might use in the future and I am increasingly seeing the use of C. So far the only applications I have written contain only Objective-C. I know Objective-C is a superset of C, but what I mean when I say that I have only written in Objective-C is that I have only used Objective-C methods and the syntax of Objective-C that is distinctly different from C syntax. I've been going through questions related to the relationship between C and Objective-C (see links below) and I want to start learning C, but apparently there are three types of C (K&R, C89, and C99), and I am wondering which type I should learn to help me with Objective-C. I know from learning Objective-C I unknowingly learned C too, but I want to understand the ins and outs of C more and become familiar with its functions, syntax, features, etc.

Thanks.

Moving from C to Objective-C?

Objective-C and its relation to C

Objective-C Programming: Will Learning C and/or Smalltalk Help?

----------EDIT------------

Also, is Objective-C based off of any one of the three types of C?

There are even more versions of C. In the meantime C1X, since finalized as C11, was defined... They all make only small evolutionary steps from their predecessors, so you shouldn't worry much about it. Objective-C is based on C99 (minus floating-point pragmas, actually), so for now that would probably be the best fit.

It's not entirely clear from your question, but you do notice that those variations of C are just evolving specifications from different years? K&R from ca. 1978, C89 from 1989, C99 from 1999 etc... Objective-C was designed to be a strict superset of C, so you can probably expect Objective-C to incorporate C11 features some day.

(NB: several edits to include information from the comments)

## The address in Kernel

7 votes

I have a question about locating an address in the kernel. I inserted a hello module into the kernel, and in this module I put the following:

``````char mystring[]="this is my address";
printk("<1>The address of mystring is %p",virt_to_phys(mystring));
``````

I thought I would get the physical address of mystring, but what I found is that the address printed in syslog is 0x38dd0000, while dumping the memory shows the real address is 0xdcd2a000, which is quite different. How can this be explained? Did I do something wrong? Thanks

PS: I used a tool to dump the whole memory, physical addresses.

According to the Man page of VIRT_TO_PHYS

The returned physical address is the physical (CPU) mapping for the memory address given. It is only valid to use this function on addresses directly mapped or allocated via kmalloc.

This function does not give bus mappings for DMA transfers. In almost all conceivable cases a device driver should not be using this function

Try allocating the memory for `mystring` using `kmalloc` first:

``````char *mystring = kmalloc(19, GFP_KERNEL);
strcpy(mystring, "this is my address"); //use kernel implementation of strcpy
printk("<1>The address of mystring is %p", virt_to_phys(mystring));
kfree(mystring);
``````

Here is an implementation of strcpy found here:

``````char *strcpy(char *dest, const char *src)
{
char *tmp = dest;

while ((*dest++ = *src++) != '\0')
/* nothing */;
return tmp;
}
``````

## Can I get the C++ preprocessor to send output during compilation?

6 votes

I have been debugging a particularly insidious bug which I now believe to be caused by unexpected changes which stem from different behavior when different headers are included (or not).

This is not exactly the structure of my code but let's just take a look at this scenario:

``````#include "Newly_created_header_which_accidentally_undefines_SOME_DEFINE.h"

// ...

#ifdef SOME_DEFINE
code_which_i_believe_i_am_always_running();
#else
code_which_fails_which_i_have_forgotten_about(); // runtime error stack traces back here, but I don't know this... or maybe it's some strange linker error
#endif
``````

I search through my git commits and narrow down the cause of the bug, compiling and running my code countless times, only to find after several hours that the only difference required for causing the bug is the inclusion of what appears to be a completely benign and unrelated header.

Perhaps this is a great argument for why the preprocessor basically just sucks.

But I like it. The preprocessor is cool because it lets us make shortcuts. It's only that some of these shortcuts, when not used carefully, bite us in the butt pretty hard.

So at this juncture it would have helped if I could use a directive like `#echo "Running old crashy code"`, whose output I would see during compilation, so that I would be tipped off immediately to start investigating why SOME_DEFINE was not defined.

As far as I know the straightforward way of determining if SOME_DEFINE is defined is to do something like

``````#ifndef SOME_DEFINE
printf("SOME_DEFINE not defined!!\n");
#endif
``````

This will surely get the job done but there is no good reason for this task to be performed at runtime because it is entirely determined at compile-time. This is simply something I'd like to see at compile-time.

That being said, in this situation, using the print (or log or even throwing an exception) may be an acceptable thing to do because I won't really care about slowing down or cluttering up the questionable code. But that doesn't apply if I have for instance two code paths both of which are important, and I just want to know at compile-time which one is being activated. I'd have to worry about running the code that does the preprocessor-conditioned print at the beginning of the program.

This is really just a long-winded way of asking the question, "Can I echo a string to the output during compilation by using a preprocessor directive?"

An answer more in line with what I was looking for is here: http://stackoverflow.com/a/3826876/340947

Sorry @sarnold