iPhone Programming Glossary: GPU

What does the Tiler Utilization statistic mean in the iPhone OpenGL ES instrument?

http://stackoverflow.com/questions/1287811/what-does-the-tiler-utilization-statistic-mean-in-the-iphone-opengl-es-instrumen

… hardware, respectively. On the MBX, Tiler Utilization typically scales with the amount of vertex data being sent to the GPU, in terms of both the number of vertices and the size of the attributes sent per vertex, and Fragment Utilization generally …
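As a minimal sketch of what "the size of the attributes sent per vertex" means in practice, the structs below compare two hypothetical vertex layouts; the names and the 16-bit normal packing are illustrative assumptions, not something taken from the answer:

```swift
// Two hypothetical vertex layouts for the same position + normal data. The
// second sends 4 fewer bytes per vertex; on MBX-class hardware, Tiler
// Utilization tends to track exactly this kind of per-vertex payload.
struct FatVertex {                  // 24 bytes: all 32-bit floats
    var px, py, pz: Float
    var nx, ny, nz: Float
}

struct PackedVertex {               // 20 bytes: 16-bit normals (GL_SHORT at draw time)
    var px, py, pz: Float
    var nx, ny, nz: Int16
    var padding: Int16 = 0          // keeps the stride 4-byte aligned
}

print(MemoryLayout<FatVertex>.stride, MemoryLayout<PackedVertex>.stride)  // 24 vs 20
```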

How can you track motion using the iPhone's camera?

http://stackoverflow.com/questions/3933716/how-can-you-track-motion-using-the-iphones-camera

… an example produced by Apple for demonstrating Core Image at WWDC 2007. That example is described in Chapter 27 of the GPU Gems 3 book. The basic idea is that you can use custom GLSL shaders to process images from the iPhone camera in realtime … that encapsulates OpenGL ES 2.0 shader processing of images and video. One of the recent additions to that is a GPUImageMotionDetector class that processes a scene and detects any kind of motion within it. It will give you back the centroid …
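A rough sketch of the GPUImageMotionDetector path described above, assuming the Objective-C GPUImage framework is linked into a Swift target; the initializer and block signature are written from memory and should be checked against the framework headers:

```swift
import AVFoundation
import GPUImage   // assumes GPUImage 1 (Objective-C) is available to Swift

// Camera frames feed the motion detector directly; no other filters are required.
let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                 cameraPosition: .back)

let motionDetector = GPUImageMotionDetector()
motionDetector.motionDetectionBlock = { centroid, intensity, _ in
    // centroid: normalized location of the detected motion in the frame
    // intensity: how much of the scene changed relative to the previous frame
    print("Motion at \(centroid), intensity \(intensity)")
}

camera.addTarget(motionDetector)
camera.startCapture()
```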

How to apply “filters” to AVCaptureVideoPreviewLayer

http://stackoverflow.com/questions/5156872/how-to-apply-filters-to-avcapturevideopreviewlayer

… to track objects in the camera view, but others have modified this code to do some neat video processing effects. All GPU-based filters in this application that display to the screen run at 60 FPS on my iPhone 4. The only iOS device out there that supports video yet doesn't have an OpenGL ES 2.0 capable GPU is the iPhone 3G. If you need to target that device as well, you might be able to take the base code for video capture and … OpenGL ES 1.1 pipeline. Edit 2/13/2012: As an update on the above, I've now created an open source framework called GPUImage that encapsulates this kind of custom image filtering. It also handles capturing video and displaying it to the screen …
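A minimal sketch of the GPUImage pipeline the answer describes, with camera frames going through an OpenGL ES 2.0 filter and straight to a view on the GPU; the sepia filter is just a stand-in for any GPUImageFilter subclass, and the GPUImage 1 bridging into Swift is an assumption:

```swift
import UIKit
import AVFoundation
import GPUImage   // assumes GPUImage 1 (Objective-C) is available to Swift

let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.hd1280x720.rawValue,
                                 cameraPosition: .back)
camera.outputImageOrientation = .portrait

let filter = GPUImageSepiaFilter()                    // any GPUImageFilter works here
let previewView = GPUImageView(frame: UIScreen.main.bounds)

// Frames stay on the GPU the whole way: camera texture -> shader -> screen.
camera.addTarget(filter)
filter.addTarget(previewView)
camera.startCapture()
```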

How can I optimize the rendering of a large model in OpenGL ES 1.1?

http://stackoverflow.com/questions/5718846/how-can-i-optimize-the-rendering-of-a-large-model-in-opengl-es-1-1

… question. A Tiler Utilization of 100% indicates that your bottleneck is in the size of the geometry being sent to the GPU. Whatever you can do to shrink the geometry size can lead to an almost linear reduction in rendering time, in my experience … you could look at using indexing, which might cut down on geometry by eliminating some redundant vertices. The PowerVR GPUs in the iOS devices are optimized for using indexed geometry as well. Try using a smaller data type for your vertex information … out anything that reads the current state, like all glGet calls, because they really mess with the flow of the PowerVR GPUs. There are other things you can do that will lead to smaller performance improvements, like using interleaved vertex, normal …
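To make the indexing, smaller-data-type, and interleaving suggestions concrete, here is a hedged Swift sketch of uploading interleaved, indexed vertex data into GL buffer objects; the struct layout and usage flags are illustrative assumptions, and a current EAGLContext is assumed to already exist:

```swift
import OpenGLES

// Interleaved vertex: positions as floats, normals packed into 16-bit shorts.
struct Vertex {
    var px, py, pz: GLfloat
    var nx, ny, nz: GLshort
    var padding: GLshort = 0        // keeps the stride 4-byte aligned
}

// Indexed geometry lets shared vertices be stored once and referenced many times.
func uploadGeometry(vertices: [Vertex], indices: [GLushort]) -> (vbo: GLuint, ibo: GLuint) {
    var vbo: GLuint = 0, ibo: GLuint = 0

    glGenBuffers(1, &vbo)
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), vbo)
    vertices.withUnsafeBytes { raw in
        glBufferData(GLenum(GL_ARRAY_BUFFER), raw.count, raw.baseAddress, GLenum(GL_STATIC_DRAW))
    }

    glGenBuffers(1, &ibo)
    glBindBuffer(GLenum(GL_ELEMENT_ARRAY_BUFFER), ibo)
    indices.withUnsafeBytes { raw in
        glBufferData(GLenum(GL_ELEMENT_ARRAY_BUFFER), raw.count, raw.baseAddress, GLenum(GL_STATIC_DRAW))
    }

    // Drawing later pulls from the index buffer, e.g.:
    // glDrawElements(GLenum(GL_TRIANGLES), GLsizei(indices.count), GLenum(GL_UNSIGNED_SHORT), nil)
    return (vbo, ibo)
}
```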

CADisplayLink OpenGL rendering breaks UIScrollView behaviour

http://stackoverflow.com/questions/5944050/cadisplaylink-opengl-rendering-breaks-uiscrollview-behaviour

How can I improve the performance of my custom OpenGL ES 2.0 depth texture generation?

http://stackoverflow.com/questions/6051237/how-can-i-improve-the-performance-of-my-custom-opengl-es-2-0-depth-texture-gener

… display 18 to 35 ms on iPhone 4. According to the PowerVR PVRUniSCo compiler (part of their SDK), this shader uses 11 GPU cycles at best, 16 cycles at worst. I'm aware that you're advised not to use branching in a shader, but in this case that … [shader code elided] … it takes 18-35 ms on iPad 1 but only 1.7-2.4 ms on iPhone 4. The estimated GPU cycle count for this shader is 8 cycles. The change in render time based on cycle count doesn't seem linear. Finally, if … then use per-pixel tests to generate a smooth intersection, while many of the pixels from the rear impostor don't waste GPU cycles by being rendered. I hadn't thought to disable depth writes yet leave on depth testing when doing the last rendering …
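The last point, disabling depth writes while leaving depth testing on for the final pass, corresponds to the standard OpenGL ES calls below; where exactly they slot into the pass ordering is specific to that renderer:

```swift
import OpenGLES

// Final impostor pass: fragments behind already-written geometry are still
// rejected by the depth test, but this pass no longer updates the depth buffer.
glEnable(GLenum(GL_DEPTH_TEST))
glDepthMask(GLboolean(GL_FALSE))

// ... issue the draw calls for the final rendering pass here ...

// Restore depth writes so later passes (or the next frame) behave normally.
glDepthMask(GLboolean(GL_TRUE))
```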

Are the Core Image filters in iOS 5.0 fast enough for realtime video processing?

http://stackoverflow.com/questions/6625888/are-the-core-image-filters-in-ios-5-0-fast-enough-for-realtime-video-processing

… while we can talk about some hard performance numbers. I created a benchmark application as part of the testing for my GPUImage framework and profiled the performance of raw CPU-based filters, Core Image filters, and GPUImage filters with live video feeds. The following were the times in milliseconds each took to apply a single gamma filter … versions:

             iPhone 4 (iOS 5)    iPhone 4S (iOS 6)
CPU          458 ms (2.2 FPS)    183 ms (5.5 FPS)
Core Image   106 ms (6.7 FPS)    8.2 ms (122 FPS)
GPUImage     2.5 ms (400 FPS)    1.8 ms (555 FPS)

For Core Image, this translates into a maximum of 9.4 FPS for a simple gamma filter on …
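For context on the Core Image row, here is a sketch of the kind of per-frame work being timed: a single gamma adjustment run through a GPU-backed CIContext. The EAGL-backed context and the per-frame flow are assumptions about the benchmark setup, not code from the answer:

```swift
import CoreImage
import CoreVideo
import OpenGLES

// A CIContext backed by an EAGLContext keeps the filtering on the GPU.
let eaglContext = EAGLContext(api: .openGLES2)!
let ciContext = CIContext(eaglContext: eaglContext)

// Apply a single gamma filter to one camera frame (one benchmark iteration).
func gammaAdjusted(_ pixelBuffer: CVPixelBuffer, power: Double = 2.2) -> CIImage? {
    let input = CIImage(cvPixelBuffer: pixelBuffer)
    guard let gamma = CIFilter(name: "CIGammaAdjust") else { return nil }
    gamma.setValue(input, forKey: kCIInputImageKey)
    gamma.setValue(power, forKey: "inputPower")
    return gamma.outputImage
}
// Rendering would then go through ciContext.draw(_:in:from:) into a GL-backed
// view, once per incoming video frame.
```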

Most efficient way to draw part of an image in iOS

http://stackoverflow.com/questions/8035673/most-efficient-way-to-draw-part-of-an-image-in-ios

… just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is being accelerated by the GPU. So if you use them properly (I mean, within designed limitations), it will perform as well as a plain OpenGL implementation … performance with a UIView implementation, because it can get full acceleration of the underlying OpenGL (which means GPU acceleration). Anyway, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders …
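A small sketch of the "use UIKit/CALayer within its designed limitations" idea applied to the original question (drawing part of an image): setting the layer's contentsRect crops during GPU compositing instead of redrawing a bitmap on the CPU. The asset name is hypothetical:

```swift
import UIKit

let imageView = UIImageView(image: UIImage(named: "spriteSheet"))  // hypothetical asset
imageView.frame = CGRect(x: 0, y: 0, width: 64, height: 64)

// contentsRect is in unit coordinates (0...1). This shows the top-left quarter
// of the backing image; the crop happens in GPU compositing, not Core Graphics.
imageView.layer.contentsRect = CGRect(x: 0, y: 0, width: 0.5, height: 0.5)
```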

Learning OpenGLES 2.0 on iOS

http://stackoverflow.com/questions/8482327/learning-opengles-2-0-on-ios

… there translate directly across to OpenGL ES 2.0 with only a little modification required. The books ShaderX6, ShaderX7, GPU Pro, and GPU Pro 2 also have sections devoted to OpenGL ES 2.0, which provide some rendering and tuning hints that you won't find elsewhere …

Will my iPhone app take a performance hit if I use Objective-C for low level code?

http://stackoverflow.com/questions/926728/will-my-iphone-app-take-a-performance-hit-if-i-use-objective-c-for-low-level-cod

When programming a CPU-intensive or GPU-intensive application on the iPhone or other portable hardware, you have to make wise algorithmic decisions to make your …