## Does the Retina display eliminate the need for anti-aliasing?

With the iPhone 4, the Retina display's resolution is so high that most people cannot distinguish the pixels from one another (supposedly). If this is the case, do apps that support the Retina display still need anti-aliasing to make fonts and images smooth, or is this no longer necessary?

Edit: I'm interested in more detailed information. Started a bounty.

There's no question at all - you still do need antialiasing mathematics, because of the complexity of curves, second-order curves, intersecting curves, and different types of joins. Sure, straight lines (perhaps at 45 degrees) may conceivably test as well in A/B tests. But just look at a shallower line or a changing differential.

And wait - there's a knock-down argument here:

Don't forget that you can display typography really, really small on a retina display!

One could say that you need antialiasing whenever letters are less than (let's say) 50 pixels high. Thus if you had a crappy 10-pixel-per-inch display, but the letters were 8000 pixels (800 inches, about 66 feet) high, you would NOT need antialiasing. We've just proved you "don't need" antialiasing on a 10 ppi display.

Conversely, let's say Steve's next display has 1000 pixels per inch. You would STILL need antialiasing for very small type -- and any very small detail -- that is 50 pixels or less!

Furthermore: don't forget that the detail in type, as in any vector image, is infinite.

You might be saying, oh, the "body" of a Baskerville "M" looks fine with no antialiasing on a retina display. Well, what about the curves of the serifs? What about the chipping on the ends of the serifs? And so on down the line.

Another way to look at it: OK, on your typical Mac display, you don't need antialiasing on flat lines, or maybe 45-degree lines. Further, on a retina display you can get away with no antialiasing on maybe 22.5-degree, and even 12.25-degree, lines.

But so what? If you add antialiasing, on a retina display, you can successfully draw ridiculously shallow lines, much shallower than on say a current Mac display.

Once again, as in the previous example, say the next iPhone has a zillion pixels per inch. Still, adding antialiasing will let you have EVEN SHALLOWER good-looking lines -- by definition, yes, it will always make it look better because it will always improve detail.

Note that the "eye resolution" business from the magazine articles is total and complete nonsense.

Even on say 50 dpi displays, you're only seeing a fuzzy amalgam created by the mathematics of the pixel display strategy.

If you don't believe this is so, look at this writing right now on your Mac, and count the pixels in the letter "r". Of course, it's inconceivable you could do that. You could maybe "resolve" pixels on a 10 dpi display. What matters is the mathematics of the fuzz created by the display strategy.

Antialiasing always creates "better fuzz", as it were. If you have more pixels to begin with, antialiasing just gives even better fuzz again. Again, simply consider even smaller features, and of course you'd want to antialias them.

That seems to be the state of affairs!

## When to use NSInteger vs int?

When should I be using NSInteger vs int when developing for iOS? I see in the Apple sample code they use NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.

``````
- (NSInteger)someFunc;
...
- (void)someFuncWithInt:(NSInteger)value;
...
``````

But within a function they're just using int to track a value:

``````
for (int i = 0; i < something; i++)
...

int something;
something += somethingElseThatsAnInt;
...
``````

I've read (been told) that NSInteger is a safe way to reference an integer in either a 64-bit or 32-bit environment so why use int at all?

You usually want to use `NSInteger` when you don't know what kind of processor architecture your code might run on, and you want the platform's natural word-sized integer type, which on 32-bit systems is just an `int`, while on a 64-bit system it's a `long`.

I'd stick with using `NSInteger` instead of `int`/`long` unless you specifically require them.

`NSInteger`/`NSUInteger` are conditional `typedef`s that resolve to one of those underlying types; they are defined like this:

``````
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
``````

## Uploading photos from an iPhone app direct to Amazon S3

I want to allow users of an iPhone app to upload photos and use Amazon S3. There are 2 ways I see going about this:

1. Upload from iPhone to my server, which proxies it then to Amazon S3.
2. Upload from iPhone direct to S3

For option 1, the security is straightforward. I don't ever have to tell the iPhone my S3 secret. Downside is that everything is proxied through our server for uploads which sort of defeats the purpose of going to S3.

For option 2, in theory it's better, but the issue is: how do you enable the iPhone (or any mobile app on a different platform) to write into my S3 bucket without giving it my secret? Additionally, I'm not sure if this is a good design, because the flow would be: iPhone uploads to S3, gets the URL, then tells the server what the URL is so it can add it to our database for future reference. However, since the communication is separated into 2 legs (iPhone->S3 vs iPhone->My-Server), it's fragile as a non-atomic operation.

I've found some older info that references using Browser-Based Uploads using POST, but I'm unsure if that is still the recommended approach. I'm hoping for a better solution where we can just use the REST APIs rather than relying on POST. I've also seen the AWS iOS Beta SDK, but its docs didn't help much, and I found an Amazon article that was equally unhelpful, as it cautions you on what not to do but doesn't tell you an alternative approach:

The mobile AWS SDKs sign the API requests sent to Amazon Web Services (AWS) in order to validate the identity of the AWS account making the request. Otherwise, a malicious developer could easily make requests to another developer's infrastructure. The requests are signed using an AWS Access Key ID and a Secret Access Key that AWS provides. The Secret Access Key is similar to a password, and it is extremely important to keep secret.

Tip: You can view all your AWS security credentials, including Access Key ID and Secret Access Key, on the AWS web site at http://aws.amazon.com/security-credentials.

Embedding credentials in source code is problematic for software, including mobile applications, because malicious users can de-compile the software or view the source code to retrieve the Secret Access Key.

Does anyone have any advice on the best architectural design and flow for this?

Update: The more I dig into this, it seems that a bunch of people recommend using the HTTP POST method with the JSON policy file as described here: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingHTTPPOST.html.

With this, the flow would be something like: (1) iPhone makes a request to my server, asking for the policy file; (2) server generates the JSON policy file and gives it back to the client; (3) iPhone does an HTTP POST of the photo + JSON policy to S3. I hate that I'm using HTTP POST in an apparently kludgy way, but it appears to be better because it removes the need for my server to store the photo at all.

I've discussed this issue on the AWS forums before. As I say there, the proper solution for accessing AWS from a mobile device is to use the AWS Identity and Access Management service to generate temporary, limited-privilege access keys for each user. The service is great, but it's still in beta for now and it's not part of the mobile SDK yet. I have a feeling once this thing is released for good, you'll see it out on the mobile SDK immediately afterwards.

Until then, generate presigned URLs for your users, or proxy through your own server like some others have suggested. The presigned URL will allow your users to temporarily GET or PUT to an S3 object in one of your buckets without actually having your credentials (they are hashed into the signature). You can read about the details here.

EDIT: I've implemented a proper solution for this problem, using the preview beta of IAM. It's available on GitHub, and you can read about it here.

## How to draw a "speech bubble" on an iPhone?

Hi there.

I'm trying to get a "speech bubble" effect similar to the one in Mac OS X when you right click on something in the dock. Here's what I have now:

I need to get the "triangle" part of the lower portion. Is there any way I can draw something like that and get a border around it? This will be for an iPhone app.

EDIT: Many thanks to Brad Larson, here's what it looks like now:

I've actually drawn this exact shape before (rounded rectangle with a pointing triangle at the bottom). The Quartz drawing code that I used is as follows:

``````
// Assumes a valid CGContextRef named context, plus strokeWidth, borderRadius,
// WIDTHOFPOPUPTRIANGLE and HEIGHTOFPOPUPTRIANGLE defined elsewhere.
CGRect currentFrame = self.bounds;

CGContextSetLineJoin(context, kCGLineJoinRound);
CGContextSetLineWidth(context, strokeWidth);
CGContextSetStrokeColorWithColor(context, [MyPopupLayer popupBorderColor]);
CGContextSetFillColorWithColor(context, [MyPopupLayer popupBackgroundColor]);

// Draw and fill the bubble
CGContextBeginPath(context);
CGContextMoveToPoint(context, borderRadius + strokeWidth + 0.5f, strokeWidth + HEIGHTOFPOPUPTRIANGLE + 0.5f);
CGContextAddLineToPoint(context, round(currentFrame.size.width / 2.0f - WIDTHOFPOPUPTRIANGLE / 2.0f) + 0.5f, HEIGHTOFPOPUPTRIANGLE + strokeWidth + 0.5f);
CGContextAddLineToPoint(context, round(currentFrame.size.width / 2.0f) + 0.5f, strokeWidth + 0.5f);
CGContextAddLineToPoint(context, round(currentFrame.size.width / 2.0f + WIDTHOFPOPUPTRIANGLE / 2.0f) + 0.5f, HEIGHTOFPOPUPTRIANGLE + strokeWidth + 0.5f);
CGContextAddArcToPoint(context, currentFrame.size.width - strokeWidth - 0.5f, strokeWidth + HEIGHTOFPOPUPTRIANGLE + 0.5f, currentFrame.size.width - strokeWidth - 0.5f, currentFrame.size.height - strokeWidth - 0.5f, borderRadius - strokeWidth);
CGContextAddArcToPoint(context, currentFrame.size.width - strokeWidth - 0.5f, currentFrame.size.height - strokeWidth - 0.5f, round(currentFrame.size.width / 2.0f + WIDTHOFPOPUPTRIANGLE / 2.0f) - strokeWidth + 0.5f, currentFrame.size.height - strokeWidth - 0.5f, borderRadius - strokeWidth);
CGContextAddArcToPoint(context, strokeWidth + 0.5f, currentFrame.size.height - strokeWidth - 0.5f, strokeWidth + 0.5f, HEIGHTOFPOPUPTRIANGLE + strokeWidth + 0.5f, borderRadius - strokeWidth);
CGContextAddArcToPoint(context, strokeWidth + 0.5f, strokeWidth + HEIGHTOFPOPUPTRIANGLE + 0.5f, currentFrame.size.width - strokeWidth - 0.5f, HEIGHTOFPOPUPTRIANGLE + strokeWidth + 0.5f, borderRadius - strokeWidth);
CGContextClosePath(context);
CGContextDrawPath(context, kCGPathFillStroke);

// Draw a clipping path for the fill
CGContextBeginPath(context);
CGContextMoveToPoint(context, borderRadius + strokeWidth + 0.5f, round((currentFrame.size.height + HEIGHTOFPOPUPTRIANGLE) * 0.50f) + 0.5f);
CGContextAddArcToPoint(context, currentFrame.size.width - strokeWidth - 0.5f, round((currentFrame.size.height + HEIGHTOFPOPUPTRIANGLE) * 0.50f) + 0.5f, currentFrame.size.width - strokeWidth - 0.5f, currentFrame.size.height - strokeWidth - 0.5f, borderRadius - strokeWidth);
CGContextAddArcToPoint(context, currentFrame.size.width - strokeWidth - 0.5f, currentFrame.size.height - strokeWidth - 0.5f, round(currentFrame.size.width / 2.0f + WIDTHOFPOPUPTRIANGLE / 2.0f) - strokeWidth + 0.5f, currentFrame.size.height - strokeWidth - 0.5f, borderRadius - strokeWidth);
CGContextAddArcToPoint(context, strokeWidth + 0.5f, currentFrame.size.height - strokeWidth - 0.5f, strokeWidth + 0.5f, HEIGHTOFPOPUPTRIANGLE + strokeWidth + 0.5f, borderRadius - strokeWidth);
CGContextAddArcToPoint(context, strokeWidth + 0.5f, round((currentFrame.size.height + HEIGHTOFPOPUPTRIANGLE) * 0.50f) + 0.5f, currentFrame.size.width - strokeWidth - 0.5f, round((currentFrame.size.height + HEIGHTOFPOPUPTRIANGLE) * 0.50f) + 0.5f, borderRadius - strokeWidth);
CGContextClosePath(context);
CGContextClip(context);
``````

The clipping path at the end can be left out if you're not going to use a gradient or some other fill that's more complex than a simple color.

## How to print in iOS 4.2?

Hi,

I want to integrate the print functionality in my app.

The document I want to print will be in .doc or .txt format. I am not very experienced in iPhone development yet, so I'm finding it difficult to implement by following the Apple documentation.

If someone could help me by posting some sample code, it would be a great help.

-iPhoneDev

Check out the Drawing and Printing Guide for iOS -- I linked to the printing section. There's sample code and good links to more sample code there.

Edit: I see now that you indicate you find the documentation difficult to follow.

Word documents are complicated -- you'll need to parse through the data, which is quite difficult.

Text and HTML are easier. I took Apple's example for HTML and changed it for plain text:

``````
- (IBAction)printContent:(id)sender {
    UIPrintInteractionController *pic = [UIPrintInteractionController sharedPrintController];
    pic.delegate = self;

    UIPrintInfo *printInfo = [UIPrintInfo printInfo];
    printInfo.outputType = UIPrintInfoOutputGeneral;
    printInfo.jobName = self.documentName;
    pic.printInfo = printInfo;

    UISimpleTextPrintFormatter *textFormatter = [[UISimpleTextPrintFormatter alloc]
        initWithText:yourNSStringWithContentsOfTextFileHere];
    textFormatter.startPage = 0;
    textFormatter.contentInsets = UIEdgeInsetsMake(72.0, 72.0, 72.0, 72.0); // 1-inch margins
    textFormatter.maximumContentWidth = 6 * 72.0;
    pic.printFormatter = textFormatter;
    [textFormatter release];
    pic.showsPageRange = YES;

    void (^completionHandler)(UIPrintInteractionController *, BOOL, NSError *) =
        ^(UIPrintInteractionController *printController, BOOL completed, NSError *error) {
            if (!completed && error) {
                NSLog(@"Printing could not complete because of error: %@", error);
            }
        };

    // On the iPad the controller must be presented from a bar button item
    // (or a rect); on the iPhone it is presented as a sheet.
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        [pic presentFromBarButtonItem:sender animated:YES completionHandler:completionHandler];
    } else {
        [pic presentAnimated:YES completionHandler:completionHandler];
    }
}
``````

## Mobile developer interview questions, that a non-mobile developer can ask

I need to interview some people for a mobile developer position (iphone) soon. The problem is that my strength is in Java web development.

What questions should I ask without sounding like an idiot? Also, what are valid answers to these questions?

Hi MKoryak. If it were me, I would ask them...

are they completely familiar with these TEN KEY POINTS:

• Xcode (and ideally its debugging tools)
• Interface Builder
• submitting apps to the App Store, and everything that involves (certs, blah blah)
• in Objective-C, using properties inside out
• in Objective-C, using delegates inside out
• networking with ASIHTTPRequest, AsyncSocket, GameKit, Bonjour
• total understanding of subclassing
• basics like Core Animation and Core Data
• "all the usual interfaces" on iOS, like UITableView, etc., etc.
• utterly everything, from top to bottom, about memory management

I think that's a good starter list. (If I've forgotten anything obvious, it will soon be suggested.)

Note that item 10, memory management, is the critical item. You just can't build finished, working, production mobile device apps unless you are a memory expert on your platform. Furthermore, someone who's really good at iPhone memory management is usually good at everything else on the iPhone. If I could ask only one thing, that's it!

There are also a dozen (more?) little things you just have to have absolutely down pat to develop for iPhone - for example "preferences," "accelerometer," "icons and splash screens," "playing sounds," and so on and on. You have to be able to do all those in five minutes, not five days of investigation, in production. It's pretty tough really. Someone could probably list all these "minor must-haves".

A perhaps separate somewhat specialist issue is OpenGL. Depending on what you're payin' them and what you need, you may demand someone who is, furthermore, an OpenGL expert.

Is your company's field games development? If so, it is perfectly likely that, as a "total" iPhone games developer, you may also need someone who is already completely expert with

• Unity3D (for 3D etc)
• the popular 2D physics packages (e.g. Chipmunk)
• one way or another, the server side of client-server systems

So that's that. A question is: what SPECIFICALLY are you going to be doing (in general terms)? I.e., scientific computing, game development, marketing apps to get rich, in-house catalogs, hand-held clients, or...? If you tell us, we can tell you what they need.

And finally, overwhelmingly: you would have to be able to see 3+ actual apps that they have done. With the iPhone, you really need to be able to "bring it home"; writing good code snippets is not enough, you know. It's tough.

Here's the "stuff we forgot in the ten critical points" list beginning already!

• Matt points out, they should be comfortable with "MVC" which stands for model-view-controller thinking. (This is kind of a fascist cult within the iOS world - we all adhere! We can't tell you about it until you are one of us. If their face lights up when you mention MVC, you're all set. If they get dark and uncomfortable looking, move on...)

• David and Brad point out that - perhaps unlike other programming fields - iPhone and Mac programmers almost always have to have a sense of the interface. You just have to have a feel for that clean iPhone interface; you have to know how to lay out any particular problem on the iPhone using the iOS elements that add up to the iPhone user experience. Make sure they know what HIG stands for.

## Customize iphone app for different clients

I have an existing app that needs to be compiled for different clients.

Each client requires their own icon and splash screen.
I would also like to be able to conditionally include various features depending whether the particular client requires them or not.

I have tried setting up different targets for each client, but I'm not having much luck so far. Different resources with the same name but different paths keep getting mixed up.

Ideally I would like to be able to build an app by duplicating another client that is similar and then just make the minimum number of changes to create the app for the new client.

What is the best way to set this app up?

Separate targets for each client should be the way to go. For the features, I would suggest first setting up a macro identifying the client in the target settings (under "Preprocessor Macros" on the build tab), then having a FeatureDefines.h file that looks like this:

``````
#ifdef macroClientA // assume client A wants features 1 and 3
#  define macroFeature1
#  define macroFeature3
#endif

// and similarly for the other clients
``````

Now you can use

``````
#import "FeatureDefines.h"

#ifdef macroFeature1
// feature 1 code
#endif
``````

any place you need to test if feature 1 is desired or not.

For the separate icons, your target settings can specify a different Info.plist file for each client, and those files can in turn specify a different filename for the icon.

For the separate splash screens, iOS always requires the splash screen to be named Default.png, but they can go in different subdirectories of your project directory. You can control which one is used for which target by right clicking where Xcode says "Groups & Files", selecting Target Membership, then checking the checkbox for the one you want to use, and making sure the other ones are unchecked.

For resources, I would suggest naming your resource files like this:

``````
resourceName.ext            // generic resource, used if there is no client-specific one
resourceName-clientName.ext // client-specific resource
``````

Next set up a general resource-finder method that looks something like this:

``````
- (NSString *)resourcePathForResourceName:(NSString *)resourceName extension:(NSString *)ext {
    NSString *clientName;
#ifdef macroClientA
    clientName = @"clientA";
#endif // and similarly for the other clients
    NSString *clientSpecificName = [NSString stringWithFormat:@"%@-%@.%@", resourceName, clientName, ext];
    NSString *genericName = [NSString stringWithFormat:@"%@.%@", resourceName, ext];
    if ([[NSFileManager defaultManager] fileExistsAtPath:clientSpecificName])
        return clientSpecificName;
    else if ([[NSFileManager defaultManager] fileExistsAtPath:genericName])
        return genericName;
    else
        return nil; // or handle the error some other way
}
``````

Running all your resource file grabs through that method will allow you to add client-specific resources to your project without changing a single line of code.

## Purpose of @ Symbol Before Strings?

I've been using Objective-C for a while now, but have never really understood what the purpose of the @ symbol before all strings is. For instance, why do you have to declare a string like this:

``````
NSString *string = @"This is a string";
``````

and not like this:

``````
NSString *anotherString = "This is another string";
``````

as you do in Java or so many other programming languages. Is there a good reason?

It denotes an NSString (rather than a standard C string).

An NSString is an object that stores a Unicode string and provides a bunch of methods to assist with manipulating it.

A C string is just a \0-terminated bunch of characters (bytes).

EDIT: And the good reason is that Objective-C builds on top of C, so the C language constructs still need to be available. @"" is an Objective-C-only extension.

## Out-Of-Memory while doing Core Data migration

Hello,

I'm migrating a Core Data model between two versions of an application. I was storing binary data as blobs in the previous version, and I want to take the data out of the blobs for performance. My issue is that during the migration, Core Data seems to load everything into memory, which leads to low-memory warnings and then to my app being killed.

Apple's documentation suggests the following: http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmCustomizingTheProcess.html#//apple_ref/doc/uid/TP40005510-SW9

However, that technique seems to rely on the large objects having a different mapping applied to them. In my case, all the objects are basically the same, and the same mapping has to be applied to each of them, so I don't see how I could apply their technique.

How should I handle a migration with very large objects?

I'm guessing that you have a bunch of changes you want to make in addition to pulling the data out of blobs. My suggestion is to do the migration in a few stages. I'm kind of thinking out loud here, so it might be possible to improve on this. This requires you to be using SQLite.

To make this work, you're going to have three versions of your model:

1. The original model
2. The model with the attribute removed (and possibly with a special unique ID added; see below)
3. The model with all of the changes you've made, including the addition of the new entity and relationships replacing the attribute

The reason to do this is that the transition from version 1 to 2 should be doable with an automatic lightweight migration. In that case Core Data doesn't need to load anything into memory; it just issues SQL statements to make the changes directly on the database.

So, you start by setting up your persistent store coordinator using the old model version. Once you've loaded the data, go through all of the objects you're migrating, extract the binary attribute, and write it to disk somehow. You can use a fetch request with batching and regular autorelease pool draining to make sure you don't use up too much memory for temporary objects. Store the data into the directory you get with NSCachesDirectory. You'll obviously want to store the data in a way that lets you relate it back to the object's managedObjectID.

Then, you shut everything down and ask Core Data to migrate the store from version 1 to version 2. See this link for details. Open up the store with version 2.

You might have to add a step where you assign some sort of unique ID to each object, because I'm not sure if Core Data maintains object IDs when it does a non-lightweight migration. If you need to do this, your version 2 model would add a new attribute to the object you're taking the binary data out of, either optional or with a default value set. Since lightweight migration shouldn't change the managedObjectIDs, you could save the mapping of your new unique ID to the managedObjectIDs you saved along with the binary data two paragraphs ago.

Save the data and close the store.

Open the store and do a migration from version 2 to version 3, which should basically be the code you already had written before you posted the question. Once the store is open, add all of the objects you saved from the version 1 store and set up the relationships using the data you saved along the way.

Simple, right?

## Percentage users still on iOS 3.x? Should I bother?

I know it's been asked/answered before, but everything I look at is from back in July, or otherwise out of date.

Should I bother making my app compatible with iOS 3.x (probably 3.1.2 and up)? It means extra testing, some coding changes, etc.

Or are enough users on iOS 4.x that I don't need to worry about it?

If there are any sites that keep up to date (daily, weekly, even monthly) stats, please post.

DO NOT BOTHER.

You will find no difference in sales. It's an iOS 4 world as of Xmas 2010.

## How to wait until location is completely found? (Core Location)

Hello.

I have a problem within my app. I'm trying to find the user's location as precisely as possible in order to determine their ZIP code. Currently I have a button that, when pressed, starts a method named `locateMe`.

``````
- (IBAction)locateMe {
    self.locationManager = [[CLLocationManager alloc] init];
    locationManager.delegate = self;
    locationManager.desiredAccuracy = kCLLocationAccuracyBest;
    [locationManager startUpdatingLocation];
}
``````

Then I've implemented `didUpdateToLocation:`

``````
- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation {

    NSLog(@"Found location! %f,%f", newLocation.coordinate.latitude, newLocation.coordinate.longitude);
}
``````

I had previously done much more complicated stuff in `didUpdateToLocation` but as I tested some things I realized that the first location it found was not precise in the least. So, I put the `NSLog` call in there and it gave me an output similar to below...

``````
Found location! 39.594093,-98.614834
Found location! 39.601372,-98.592171
Found location! 39.601372,-98.592171
Found location! 39.611444,-98.538196
Found location! 39.611444,-98.538196
``````

As you can see, it first gives me a value which is not correct, which was causing problems within my app because it wasn't giving the correct location.

So, here's my question. Is there any way I can wait for the location manager to finish finding the most precise location?

EDIT: I'm wanting something like this:

``````
if (newLocation.horizontalAccuracy <= locationManager.desiredAccuracy) {
}
``````

But it never gets called!

Core Location often calls didUpdateToLocation with a location detected during a previous session, so you can just skip the first location it sends you. And if you're calculating the user's speed from this data, you should be aware of that behavior: pedestrians moving at 100 mph are usual in that case. :)

If you're submitting comments or photos with geo coordinates, start receiving coordinates when the user enters the write-a-comment screen; while they type the message, the detected location will become pretty accurate.

## Difference between iPhone Simulator and Android Emulator

What is the difference between iPhone Simulator and Android emulator? I have heard people saying that Emulator really emulates the target device which is not true in case of simulator.

They say the Android emulator mimics the processing speed and memory usage of the target device.

Disclaimer: I'm only an iPhone developer, not an Android developer.

You are correct: the difference is that emulators mimic both the software and hardware environments found on actual devices, while simulators only mimic the software environment.

Apple always harps on the importance of device testing because the iPhone Simulator does not emulate an iPhone processor, disk drive, memory constraints, and whatnot. You hardly ever get memory warnings unless your Mac is struggling to manage resources itself, though you can simulate memory warnings from the Simulator's menu.

In fact, if you go to Settings > General > About, you'll see that the Simulator's disk capacity is the same as that of the Mac it's installed on.

## iPhone: How to get local currency symbol (i.e. "$" instead of "AU$")

Here's the code I currently use to get the currency symbol:

``````
NSLocale *lcl = [[[NSLocale alloc] initWithLocaleIdentifier:@"au_AU"] autorelease];
NSNumberFormatter *fmtr = [[[NSNumberFormatter alloc] init] autorelease];
[fmtr setNumberStyle:NSNumberFormatterCurrencyStyle];
[fmtr setLocale:lcl];

NSLog( @"%@", [lcl displayNameForKey:NSLocaleCurrencySymbol value:@"AUD"] );
NSLog( @"%@", [fmtr currencySymbol] );
``````

Both NSLogs return "AU$". As I understood from the Apple developer documentation, there are at least two currency symbols for each currency (though they may be the same): a local one used within the country ("$" for Australia, for example) and an international one ("AU$" for Australia). So, the question is: how do I get the LOCAL currency symbol? Any ideas?

It's not ideal in that it's not coming out of the system, but obviously you could create your own internal table using a list of current currency symbols. Since that list has the Unicode symbols in it, it would simply be a matter of matching up Apple's list of locales with the list.

Y'know, just in case the Apple-provided ones aren't actually accessible.

## How to make something like iPhone Folders?

Hi there!

I'm wanting to know if there's a way I can transform my view to look something like iPhone folders. In other words, I want my view to split somewhere in the middle and reveal a view underneath it. Is this possible?

EDIT: Per the suggestion below, I could take a screenshot of my application by doing this:

``````
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
``````

Not sure what to do with this, however.

EDIT:2 I've figured out how to add some shadows to my view, and here's what I've achieved (cropped to show relevant part):

The basic thought is to take a picture of your current state and split it somewhere, then animate both parts by setting new frames. I don't know how to take a screenshot programmatically, so I can't provide sample code…

EDIT: Hey hey, it's not looking great, but it works ^^

``````
// UIGraphicsBeginImageContext wouldn't be sharp on retina displays;
// use the "WithOptions" variant with scale 0.0 instead.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGRect fstRect = CGRectMake(0, 0, 320, 200);
CGRect sndRect = CGRectMake(0, 200, 320, 260); // was 0,200,320,280

CGImageRef fImageRef = CGImageCreateWithImageInRect([f CGImage], fstRect);
UIImage *fCroppedImage = [UIImage imageWithCGImage:fImageRef];
CGImageRelease(fImageRef);

CGImageRef sImageRef = CGImageCreateWithImageInRect([f CGImage], sndRect);
UIImage *sCroppedImage = [UIImage imageWithCGImage:sImageRef];
CGImageRelease(sImageRef);

UIImageView *first = [[UIImageView alloc] initWithFrame:fstRect];
first.image = fCroppedImage;
//first.contentMode = UIViewContentModeTop;
UIImageView *second = [[UIImageView alloc] initWithFrame:sndRect];
second.image = sCroppedImage;
//second.contentMode = UIViewContentModeBottom;

UIView *blank = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
blank.backgroundColor = [UIColor darkGrayColor];

// Put the underlying view and the two halves on screen before animating
[self.view addSubview:blank];
[self.view addSubview:first];
[self.view addSubview:second];

[UIView animateWithDuration:2.0 animations:^{
    second.center = CGPointMake(second.center.x, second.center.y + 75);
}];
``````

You can uncomment the two `.contentMode` lines and the quality will improve, but in my case the subviews then had an offset of 10px or so (you can see it by setting a background color on both subviews).

EDIT 2: OK, found that bug. I had used the whole 320x480 screen, but the status bar has to be cut off, so it should be 320x460 and all is working great ;)

## Activity Indicator when integrated into Searchbar does not display in iPhone SDK

Hi Guys,

In my iPhone app, I want to add an activity indicator on top of a search bar. While a search is running, it should display the activity indicator, and I hide it when the search finishes.

I have added the activity indicator in the XIB and created an outlet for it, but the activity indicator does not display.

Problem

I figured out that the search function (say A), where I animate the activity indicator, in turn calls another function (say B), so the main thread is busy executing function B. But for the activity indicator to animate, we need the main thread.

So I tried calling function B using the `performSelectorInBackground:withObject:` method. Now when I tap search, the activity indicator is shown, but function B does not execute.

What can be a work around for this?

Thanks

Thanks to all the guys for your immense help and for appreciating the question.

Sorry to those I couldn't reply to.

I have got the solution and it is as follows.

I just wrote the below line in Search button click event.

`[NSThread detachNewThreadSelector:@selector(threadStartAnimating:) toTarget:self withObject:nil];`

And defined the function `threadStartAnimating:` as follows:

``````- (void)threadStartAnimating:(id)data
{
    [activityIndicator setHidden:NO];
    [activityIndicator startAnimating];
}
``````

Hope this helps someone.

Thanks once again.
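A note on this workaround: UIKit should generally be driven from the main thread, and under manual memory management a background thread also needs its own autorelease pool. An arguably safer arrangement is the reverse of the above: keep the indicator on the main thread and push the slow search work (function B) into the background. A minimal sketch, where the method names `startSearch` and `performSearch:` are hypothetical placeholders for your own code:

``````// Called on the main thread, e.g. from the search button's action.
- (void)startSearch
{
    [activityIndicator setHidden:NO];
    [activityIndicator startAnimating];   // UIKit work stays on the main thread

    // Run the slow search (function B) off the main thread.
    [self performSelectorInBackground:@selector(performSearch:) withObject:nil];
}

- (void)performSearch:(id)data
{
    // Each background thread needs its own pool under manual memory management.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // ... do the actual searching here (function B) ...

    // Hop back to the main thread to update the UI.
    [activityIndicator performSelectorOnMainThread:@selector(stopAnimating)
                                        withObject:nil
                                     waitUntilDone:NO];
    [pool release];
}
``````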

## Does UIViewController's presentModalViewController:animated: retain the modal controller?

This has implications on the way I interact with my modal controllers. When I first started out in iOS development, I assumed that `UIViewController` did not retain the modally presented view. Well, really it was more like I had no reason to assume it did retain them. This left me with fairly awkward attempts at releasing them when I knew they would have finished their dismissal animations:

``````_myViewController = [[UIViewController alloc] init];
[self presentModalViewController:_myViewController animated:YES];
/*
Some stuff, then in a different method altogether,
probably as the result of a delegate callback or something...
*/
[self dismissModalViewControllerAnimated:YES];
[_myViewController performSelector:@selector(release) withObject:nil afterDelay:0.5f];
``````

Then, I saw the `modalViewController` property of `UIViewController` and thought, "Man, I hope it retains that property when a modal view controller is presented." Sure enough, I logged the retain count on several of these attempts and noticed a general increase immediately after the call to `presentModalViewController:animated:` (I know, retain counts are not a perfect metric). So, somewhere along the line, I have started using a much nicer pattern where I assume that any controller object I present modally is retained by the presenting controller. This lets me write the standard present code:

``````UIViewController* myViewController = [[UIViewController alloc] init];
[self presentModalViewController:myViewController animated:YES];
[myViewController release]; // <- Fire and forget!
``````

Now, of course, there is no awkwardness: no need to wait for an animation to finish, or even keep a reference to the presented controller if I don't need it. I can blindly dismiss it later and not worry about leaking. I like it.

I have logged many a dealloc in my modally presented controllers and they are always called precisely when I want, which leads me to feel confident in my approach: `UIViewController`'s `presentModalViewController:animated:` retains the presented controller as the `modalViewController` property.

But, and this is the meat of this question, I realized that I can't confirm this as documented behavior. And if it's not documented, I should not feel nearly as safe as I do, because Apple makes no promises about the longevity of undocumented behavior. The `modalViewController` property is publicly `readonly`, so I can only assume a retain behind the scenes, and the documentation on `presentModalViewController:animated:` says only:

Sets the modalViewController property to the specified view controller.

"Sets" could be `assign` or `retain`. Nothing I read blatantly confirms or denies my position. Since this is an assumption I make often, I would really love it if someone could point out a fact that I have missed somewhere in the bowels of documentation to put my mind at ease about the legitimacy of this practice.

EDIT: In the ebb and flow of day-to-day life in the iOS SDK, I found myself in the header for UIViewController and started reading some of it. I gleaned some useful info that reminded me of this question and I decided to post it, in the event some future user stumbles upon this question and wants as much info as possible to satisfy their paranoia of a very standard practice. The little morsel is simply this, from the @interface ivar block in UIViewController.h:

``````UIViewController *_childModalViewController;
``````

As opposed to these other declarations:

``````UIViewController *_parentViewController; // Nonretained
NSHashTable      *_childViewControllers; // Nonretained
``````

The comments seem to explicitly state what is not retained. By virtue of a lack of comment on the modal view controller ivar declaration, it would seem it is retained.

The memory management rules of Objective-C define the behaviour, so it doesn't need to expressly document that it retains the modal view controller. If an object needs to keep a passed object around after the method has finished executing, it will retain the object unless otherwise specified.

So in this case, you should just pass the view controller to `presentModalViewController:animated:` and then release it (or use autorelease).

This applies everywhere in Objective-C. If a method takes another object as input and the receiver needs it afterwards, the receiver is responsible for retaining it; you never have to retain the object on the receiver's behalf.
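The convention can be illustrated with the kind of setter a retaining property generates under manual reference counting. This is a hypothetical sketch of what UIKit might do internally when you present a modal controller, not actual Apple code:

``````// Hypothetical retain-style setter, shaped like what @property (retain)
// would synthesize pre-ARC.
- (void)setModalViewController:(UIViewController *)controller
{
    [controller retain];                    // keep the new controller alive
    [_childModalViewController release];    // let go of the old one
    _childModalViewController = controller;
}
``````

Because the receiver takes its own retain, the caller is free to release its reference immediately after presenting.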

As requested in the comments, if you read Apple's documentation on memory management, then you'll find the section on Weak References, which states:

Important: In Cocoa, references to table data sources, outline view items, notification observers, and delegates are all considered weak (for example, an NSTableView object does not retain its data source and the NSApplication object does not retain its delegate). The documentation only describes exceptions to this convention.

This actually states that this is a convention in itself and that exceptions will be stated in the documentation. However, going to the documentation for `NSTableView` and looking at the `setDataSource:` method, we see:

Discussion: In a managed memory environment, the receiver maintains a weak reference to the data source (that is, it does not retain the data source; see Communicating With Objects). After setting the data source, this method invokes `tile`.

This method raises an NSInternalInconsistencyException if anObject doesn’t respond to either numberOfRowsInTableView: or tableView:objectValueForTableColumn:row:.