150+ ways to improve performance of iOS Application — I

Shrawan K Sharma
34 min read · Mar 12, 2023

What is performance?

  • Mobile performance refers to the non-functional quality attributes of a mobile app related to how well it behaves (load time, response times, etc.) and how it uses the resources available in the device where it runs.
  • Performance is an aspect of software design that is often overlooked until it becomes a serious problem. If you wait until the end of your development cycle to do performance tuning, it may be too late to achieve any significant improvements. Performance is something to include early in the design phase and continue improving all throughout the development cycle.

Parameters on which to improve performance:

  • Short response time for a given piece of work.
  • High throughput (rate of processing work).
  • Low utilization of computing resource(s).
  • High availability of application.
  • Short data transmission time.
  • Application size
  • Compile/Build/Run time
  • Frames per second/frequency (rate)
  • App crash/Stability
  • Testability

Let’s analyse different aspects of improving the performance of an iOS application:

  1. Method Dispatch
  2. Concurrency
  3. Watchdog terminations
  4. Type casting
  5. Multi-chaining using map, reduce, filter
  6. Thread explosion
  7. Tableview performance / Frame rate
  9. NSCache vs Dictionary
  9. Memory allocation
  10. Resource crunch
  11. Core Data performance
  12. Location updates
  13. Frequent Analytics log
  14. Composition over Inheritance
  15. Multiple task
  16. Long polling vs short polling
  17. Background thread
  18. Mixing struct and class
  19. Low battery
  20. Low network
  21. Local storage
  22. Limiting animation
  23. Whole-module optimisation
  24. Bitcode enabled
  25. Memory consumption
  26. Stack and Heap memory
  27. Dynamic text for different language
  28. forEach vs map
  29. Codable
  30. Float/Double vs Int vs UInt16
  31. Property wrappers
  32. Stack view vs normal views
  33. App lifecycle
  34. Notifications
  35. Observer
  36. Data structure
  37. Timer / Run loop
  38. Size classes
  39. Library/Module linking
  40. Weak vs Unowned
  41. Downloading
  42. Thread issues — Deadlock/Race condition/Priority Inversion
  43. mutating vs non mutating
  44. Animation
  45. Try/Catch throw
  46. Operation vs Dispatch queue
  47. Shallow copy vs Deep copy
  48. Atomic
  49. Re-render of layers
  50. SubView lifecycle
  51. Method swizzling
  52. Library usages / Size / Usecase
  53. Improve compilation time
  54. Improve build time
  55. Improve deployment process
  56. Jenkins
  57. Connection tear up / retry
  58. Encryption/Decryption key size
  59. SSL Pinning
  60. Computation logic in frontend due to limit of resource
  61. Remove unnecessary code from API Json
  62. Unused unnecessary code
  63. Pagination
  64. SOLID principles and design pattern use
  65. Deinit
  66. Early exit using guard let
  67. Using lazy initialization
  68. Memory leak
  69. switch statements vs if else
  70. inout parameters
  71. Multiple API calls
  72. Delayed View loading
  73. Operation vs GCD
  74. sync vs async
  75. Memory conflict
  76. Pure functions/ no side effects
  77. Pure components
  78. dequeueReusableCell / Height calculation
  79. Security vs performance
  80. Architecture / DRY principles
  81. Multiplatform support
  82. Image loading (load small size and then large image)
  83. Progress bar to show heavy processing
  84. UX to show high performance task
  85. Extra screen before login screen to call other api
  86. Real time data websocket only one connection
  87. HLD/LLD — System design
  88. Overuse of silent notifications
  89. Timer/Runloop
  90. Remove navigation stack if memory is low
  91. Code quality / Code coverage
  92. Binary search vs normal search
  93. Debouncing
  94. Avoid unnecessary I/O
  95. Prefetching
  96. Check website response time
  97. Lodash in swift
  98. Inline functions
  99. Equality comparison / Equatable / Hashable
  100. 1x/2x/3x image / svg vs png vs jpeg / 16-32Bit
  101. Dynamic icon loading
  102. Image size / Image loading optimization
  103. Video buffering / Live streaming
  104. Quick Dev/QA/Prod environment build IPA
  105. Don’t sync design with Android. Each platform has its own design guidelines (e.g. Apple’s “Human Interface Guidelines”)
  106. Time profiling
  107. Optimize data query
  108. Incremental build
  109. Divide in module
  110. Universal linking
  111. Info.plist storage
  112. Reduce app size / Incremental build
  113. Sensitive data
  114. Stringbuilder vs string
  115. App launch time
  116. Unsafe memory management
  117. CocoaPods / Swift Package Manager compile time
  118. Biometric
  119. On demand resource
  120. Background fetch
  121. Autoclosure to improve performance
  122. UIResponder chaining
  123. Timer to logout
  124. Dispatch group
  125. Final/Static
  126. Singleton
  127. Multiple UIWindow
  128. Content hugging
  129. Multiple line textfield
  130. Higher order functions
  131. Escaping / nonescaping closures
  132. ABI stability
  133. Device token — Unnecessary api call
  134. UUID performance impact
  135. Multilanguage support
  136. Keychain storage
  137. Crash analysis
  138. Out of bounds
  139. CI/CD
  140. Any vs any vs some
  141. KVO
  142. Testing module
  143. Access control: exposing unnecessary code
  144. Scrollview performance
  145. Type method vs normal methods
  146. Map view performance
  147. Apple design vs Material design
  148. Support multiple device || screen classes
  149. Accessibility
  150. Unit test to check performance
  151. App rejection
  152. Audio recording performance
  153. CPU/Energy consumption
  154. Alerts
  155. Unwanted push notification
  156. Remove device-token from server
  157. Independent view composition lazy loading
  158. Downloading files
  159. Downloading images
  160. Creating reference of single image while sharing
  161. Locking screen after payment
  162. Protocol vs closure
  163. Closure capture
  164. Use of map/flatMap over the ? operator to unwrap values
  165. Appending array / dictionary
  166. Debouncing — Search result
  167. Multiple storyboard
  168. Resources Release
  169. Minimize use of external libraries
  170. Cache Control- max-age
  171. Copy on write / Copy on assignment

Let’s visit the above points one by one and analyse their impact on performance:

Dispatch techniques -

Increasing Performance by Reducing Dynamic Dispatch

Method dispatch is the algorithm used to decide which method will be invoked in response to a “message”. This algorithm is needed, for example, when inheritance is used and a method is called on an object. In other words, it is the way of choosing which implementation will execute when you invoke a method.

There are four forms of dispatch:

  • Inline (Fastest)
  • Direct/Static Dispatch
  • Virtual Dispatch
  • Dynamic Dispatch (Slowest)
  • When the compiler resolves the implementation of a method at compile time, it is static dispatch. When the implementation is resolved at runtime, it is dynamic dispatch.
  • Static dispatch results in faster code. Dynamic dispatch gives flexibility when you think and code in terms of OOP. In general, Swift uses dynamic dispatch for pure OOP things. Like classes, protocols and closures. For value types, static dispatch is used.
  • Static Dispatch is supported by both value types and reference types. Dynamic Dispatch is supported only by reference types(i.e. Class)
  • Dynamic dispatch technique provides flexibility to the developer in the form of Polymorphism.
  • We can use the dynamic keyword, prefixed with @objc, to expose a method to the Objective-C runtime
  • Inline: when you declare a function or method as inline, you tell the compiler to replace every call to it with the body of that function or method, avoiding call overhead. The compiler ultimately decides whether to inline a call or keep a dynamic call.
  • The goal of inline caching is to speed up runtime method binding by remembering the results of a previous method lookup directly at the call site. Inline caching is especially useful for dynamically typed languages where most if not all method binding happens at runtime and where virtual method tables often cannot be used.
  • In Swift, dynamic dispatch defaults to indirect invocation through a vtable. If one attaches the dynamic keyword to the declaration, Swift will emit calls via Objective-C message send instead. In both cases this is slower than a direct function call because it prevents many compiler optimizations [2] in addition to the overhead of performing the indirect call itself. In performance critical code, one often will want to restrict this dynamic behavior.

In Objective-C all methods are resolved dynamically at runtime.

Swift will try to optimize method dispatch whenever it can. For instance, if you have a method that is never overridden, Swift will notice this and will use direct dispatch if it can.

How does Dispatch work?

The struct representing a Swift class lacks a method_list; instead, as in C++, Swift classes have a vtable member which lists the methods available in the class.

[Diagram omitted: how the vtable works and how it can add overhead to system performance if we are not careful.]

Inline functions

Avoid Excessive Function Inlining

Although inline functions can improve speed in some situations, they can also degrade performance on OS X if used excessively. Inline functions eliminate the overhead of calling a function but do so by replacing each function call with a copy of the code. If an inline function is called frequently, this extra code can add up quickly, bloating your executable and causing paging problems.

Used properly, inline functions can save time and have a minimal impact on your code footprint. Remember that the code for inline functions should generally be very short and called infrequently. If the time it takes to execute the code in a function is less than the time it takes to call the function, the function is a good candidate for inlining. Generally, this means that an inline function probably should have no more than a few lines of code. You should also make sure that the function is called from as few places as possible in your code. Even a short function can cause excessive bloat if it is made inline in dozens or hundreds of places.

Also, you should be aware that GCC’s “Fastest” optimization level should generally be avoided. At this optimization level, the compiler aggressively tries to inline functions, even functions that are not marked as inline. Unfortunately, doing so can significantly increase the size of your executable and cause far worse performance problems due to paging.

The final keyword is a restriction on a declaration of a class, a method, or a property such that the declaration cannot be overridden. This implies that the compiler can emit direct function calls instead of indirect calls. For instance, in the following, C.array1 and D.array1 will be accessed directly; in contrast, D.array2 will be called via a vtable:

final class C {
    // No declarations in class 'C' can be overridden.
    var array1: [Int]
    func doSomething() { ... }
}

class D {
    final var array1: [Int] // 'array1' cannot be overridden by a computed property.
    var array2: [Int]       // 'array2' *can* be overridden by a computed property.
}

func usingC(_ c: C) {
    c.array1[i] = ... // Can directly access C.array1 without going through dynamic dispatch.
    c.doSomething()   // Can directly call C.doSomething without going through virtual dispatch.
}

func usingD(_ d: D) {
    d.array1[i] = ... // Can directly access D.array1 without going through dynamic dispatch.
    d.array2[i] = ... // Will access D.array2 through dynamic dispatch.
}

Concurrency —

Concurrency is the notion of multiple things happening at the same time. With the proliferation of multicore CPUs and the realization that the number of cores in each processor will only increase, software developers need new ways to take advantage of them. Although operating systems like OS X and iOS are capable of running multiple programs in parallel, most of those programs run in the background and perform tasks that require little continuous processor time. It is the current foreground application that both captures the user’s attention and keeps the computer busy. If an application has a lot of work to do but keeps only a fraction of the available cores occupied, those extra processing resources are wasted.

  • Concurrency adds complexity; it is not a feature that you can graft onto an application at the end of your product cycle. Doing it right requires careful consideration of the tasks your application performs and the data structures used to perform those tasks. Done incorrectly, you might find your code runs slower than before and is less responsive to the user. Therefore, it is worthwhile to take some time at the beginning of your design cycle to set some goals and to think about the approach you need to take.
  • If you implemented your tasks using blocks, you can add your blocks to either a serial or concurrent dispatch queue. If a specific order is required, you would always add your blocks to a serial dispatch queue. If a specific order is not required, you can add the blocks to a concurrent dispatch queue or add them to several different dispatch queues, depending on your needs.
  • If you implemented your tasks using operation objects, the choice of queue is often less interesting than the configuration of your objects. To perform operation objects serially, you must configure dependencies between the related objects. Dependencies prevent one operation from executing until the objects on which it depends have finished their work.
  • Although you could create 10,000 operation objects and submit them to an operation queue, doing so would cause your application to allocate a potentially nontrivial amount of memory, which could lead to paging and decreased performance.
  • Threads are still a good way to implement code that must run in real time. Dispatch queues make every attempt to run their tasks as fast as possible but they do not address real time constraints. If you need more predictable behaviour from code running in the background, threads may still offer a better alternative.
  • If you are currently using semaphores to restrict access to a shared resource, you should consider using dispatch semaphores instead. Traditional semaphores always require calling down to the kernel to test the semaphore. In contrast, dispatch semaphores test the semaphore state quickly in user space and trap into the kernel only when the test fails and the calling thread needs to be blocked. This behaviour results in dispatch semaphores being much faster than traditional semaphores in the uncontested case. In all other aspects, though, dispatch semaphores offer the same behaviour as traditional semaphores.
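As a minimal sketch of the semaphore point above (the queue, limit, and workload are illustrative):

```swift
import Foundation

// Hedged sketch: a DispatchSemaphore bounding concurrent access to a shared
// resource to 3 tasks at a time. The "work" here is a stand-in.
let semaphore = DispatchSemaphore(value: 3)
let queue = DispatchQueue.global(qos: .utility)
let group = DispatchGroup()
let lock = NSLock()
var completed = 0

for _ in 0..<10 {
    group.enter()
    queue.async {
        semaphore.wait()      // cheap user-space test; blocks in the kernel only under contention
        // ... work on the shared resource ...
        lock.lock(); completed += 1; lock.unlock()
        semaphore.signal()
        group.leave()
    }
}
group.wait()                  // block here until all 10 tasks have finished
```

Only three tasks can sit between wait() and signal() at any moment; the rest queue up without spawning extra threads per task.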

Tips for Improving Efficiency

1. Consider computing values directly within your task if memory usage is a factor.
2. Identify serial tasks early and do what you can to make them more concurrent.
3. Avoid using locks
4. Rely on the system frameworks whenever possible.

3. Watchdog terminations

Users expect apps to launch quickly and to be responsive to touches and gestures. The operating system employs a watchdog that monitors launch times and app responsiveness, and terminates unresponsive apps. Watchdog terminations use the code 0x8badf00d (pronounced “ate bad food”) as the termination reason in crash reports.

Reasons for watchdog terminations:-

The watchdog terminates apps that block the main thread for a significant time. There are many ways to block the main thread for an extended time, such as:

  • Synchronous networking
  • Processing large amounts of data, such as large JSON files or 3D models
  • Triggering lightweight migration for a large Core Data store synchronously
  • Analysis requests with Vision
  • Not ending every background task that you begin.
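A common fix for several of the causes above is simply moving the heavy work off the main thread. A minimal sketch (the Payload type and the callback are illustrative stand-ins):

```swift
import Foundation

// Sketch: decode a large JSON payload on a background queue and hop back to
// the main queue only for the UI update, keeping the main thread responsive.
struct Payload: Decodable { let items: [Int] }

func load(data: Data, updateUI: @escaping (Payload?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Heavy parsing happens off the main thread, out of the watchdog's sight.
        let payload = try? JSONDecoder().decode(Payload.self, from: data)
        DispatchQueue.main.async {
            updateUI(payload)   // only the UI update touches the main thread
        }
    }
}
```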

4. Type casting

Apple doc says: Type casting is a way to check the type of an instance, or to treat that instance as a different superclass or subclass from somewhere else in its own class hierarchy.

  • Type casting enables you to check and interpret the type of a class instance at runtime.
  • Type casting in Swift is implemented with the is and as operators. is is used to check the type of a value whereas as is used to cast a value to a different type.

The is operator returns true if an instance conforms to a protocol and returns false if it doesn’t.

The as? version of the downcast operator returns an optional value of the protocol’s type, and this value is nil if the instance doesn’t conform to that protocol.

The as! version of the downcast operator forces the downcast to the protocol type and triggers a runtime error if the downcast doesn’t succeed.
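As a small illustration of is, as? and as! (the types here are invented for the example):

```swift
import Foundation

// Illustrative types showing the three casting operators against a protocol.
protocol SeverityProviding { var severity: Int { get } }
struct CrashEvent: SeverityProviding { let severity = 3 }
struct TapEvent {}

let events: [Any] = [CrashEvent(), TapEvent()]

let conforming = events.filter { $0 is SeverityProviding }                   // type check
let severities = events.compactMap { ($0 as? SeverityProviding)?.severity }  // safe downcast
// events[1] as! SeverityProviding would trap at runtime, since TapEvent does not conform.
```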

How does type casting impact performance?

  • Casting from, e.g., Int to Float has a real cost: the compiler emits code to convert one representation into the other.
  • In contrast, an object type cast does not transform the value: the pointer itself is not changed; you are asserting that it points to an object of the stated type. The cost lies in the runtime check, and a later method call will fail if the type is not what you claimed.
  • Automatic conversion is a source of software bugs and often hurts performance.
  • Use the forced form of the type cast operator (as!) only when you are sure that the downcast will always succeed. This form of the operator will trigger a runtime error if you try to downcast to an incorrect class type.
  • The two ways to improve app performance around protocol conformance checks are to minimize the number of conformances and the number of as? operations.
// Example 1: dynamic cast on every call
func logEvent(_ event: Event) {
    if let severity = (event as? EventSeverityProviding)?.severity {
        sendToServer("Received log \(event.description)", severity: severity)
    } else {
        sendToServer("Received log \(event.description)")
    }
}

// Example 2: overloads resolved at compile time
func logEvent<T: Event & EventSeverityProviding>(_ event: T) {
    sendToServer("Received log \(event.description)", severity: event.severity)
}

func logEvent(_ event: Event) {
    sendToServer("Received log \(event.description)")
}

In the second case, as long as the compiler knows the type of event at the callsite, it avoids the dynamic cast entirely.


Ref: https://www.emergetools.com/blog/posts/SwiftProtocolConformance
  • Existential types are also significantly more expensive than using concrete types

5. Multi-chaining using map, reduce, filter

There is nothing wrong with using higher-order functions when we do NOT need to chain them. Performance is considerably better when we use the built-in map function, and slightly better or worse when we use the built-in filter/reduce.

If we want to chain higher-order functions, we should consider not using them and implementing the work as a for-in loop instead. The performance is considerably better, measured at 2.37x faster than the chained built-in functions.
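A sketch of the difference (the numbers here illustrate the shape of the code, not its speed):

```swift
// One pass with a for-in loop (or a lazy chain) avoids the intermediate
// array that an eager filter-then-map chain allocates.
let numbers = Array(1...10)

// Chained higher-order functions: two passes and one intermediate array.
let chained = numbers.filter { $0 % 2 == 0 }.map { $0 * $0 }

// Single for-in loop: one pass, no intermediate array.
var looped: [Int] = []
for n in numbers where n % 2 == 0 {
    looped.append(n * n)
}

// Lazy fusion keeps the chained style but also avoids the intermediate array.
let fused = Array(numbers.lazy.filter { $0 % 2 == 0 }.map { $0 * $0 })
```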


6. Thread explosion

When designing tasks for concurrent execution, do not call methods that block the current thread of execution. When a task scheduled by a concurrent dispatch queue blocks a thread, the system creates additional threads to run other queued concurrent tasks. If too many tasks block, the system may run out of threads for your app.

- Each thread has a cost associated with it that impacts app performance. Each thread not only takes some time during creation but also uses up memory in the kernel as well as in the app’s memory space.

- Each thread consumes approximately 1 KB of memory in kernel space.

- The main thread stack size is 1 MB and cannot be changed.

- Any secondary thread is allocated 512 KB of stack space by default.

- Note that the full stack is not immediately created. The actual stack size grows with use. So, even if the main thread has a stack size of 1 MB, at some point in time, the actual stack size may be much smaller.

- Before a thread starts, the stack size can be changed. The minimum allowed stack size is 16 KB, and the size must be a multiple of 4 KB.

- The time taken to actually start a thread after creation ranged from anywhere between 5 ms to well over 100 ms, averaging about 29 ms. That can be a lot of time, especially if you start multiple threads during app launch.

for i in 0 ..< 500_000 {
    DispatchQueue.global().async {
        print(i)
    }
}

That dispatches half a million work items to a queue that can only support 64 worker threads at a time. That is “thread explosion”: exceeding the worker thread pool.

You should instead do the following, which constrains the degree of concurrency with concurrentPerform:

DispatchQueue.global().async {
    DispatchQueue.concurrentPerform(iterations: 500_000) { i in
        print(i)
    }
}

operationQueue.maxConcurrentOperationCount = 4
// The maximum number of queued operations that can run at the same time.

operationQueue.maxConcurrentOperationCount = OperationQueue.defaultMaxConcurrentOperationCount
// The operation queue determines this number dynamically based on current system conditions.
// You may monitor changes to this value using key-value observing: observe the
// maxConcurrentOperationCount key path of the operation queue.

let semaphore = DispatchSemaphore(value: 5)
// Use a semaphore to limit the amount of work happening at once.

7. Tableview performance / Frame rate

  • Reuse already-created cells (via dequeueReusableCell) instead of creating a new cell for every row, and only then display them on screen
  • TableView needs the height of each cell to lay it out, and the heights of all cells to know its own content height. Therefore, every render may recalculate cell heights, so we want to minimize the complexity of height calculations. Cache cellHeight as an attribute of the data, so that only one height needs to be calculated per cell
  • UIImage, UIFont, NSDateFormatter or any object that is needed for drawing should be stored upfront
  • Reduce the number and level of sub-views. The deeper the subview is, the more computation is required to render to the screen.
  • Reduce the transparent layer of the child View
  • Adding a shadow to the View in the Cell can cause performance problems
  • Reduce Usage Of Non-Opaque Views As Much As Possible. An opaque view is a view that has no transparency, meaning that any UI element placed behind it is not visible at all.
  • If a view is set to opaque, then the drawing system will just put this view in front and avoid the extra work of blending the multiple view layers behind it
  • Image caching
  • Don’t do synchronous fetches (network calls, disk reads etc.)
  • Avoid use of boundingRectWithSize for text measurements since it leads to heavy processing
  • Limiting usage of hidden when configuring cells
  • Limiting complexity of autolayout.
  • Cell heights all pre-calculated and stored
  • Strive to make all subviews of all cells opaque.
  • Make sure you are not doing any expensive calculations in cellForRowAtIndexPath callback
  • If possible, when creating a TableView, set its rowHeight property directly.
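The height-caching advice above can be sketched without UIKit (Message and the measuring closure are illustrative; in a real app the closure would wrap the expensive text measurement):

```swift
import Foundation

// Sketch: cache each row's computed height so heightForRowAt becomes a
// dictionary lookup after the first layout pass.
struct Message { let id: String; let text: String }

final class HeightCache {
    private var heights: [String: CGFloat] = [:]

    func height(for message: Message, width: CGFloat,
                measure: (String, CGFloat) -> CGFloat) -> CGFloat {
        if let cached = heights[message.id] {
            return cached                       // O(1) on every later scroll pass
        }
        let h = measure(message.text, width)    // expensive measurement, done once per row
        heights[message.id] = h
        return h
    }
}
```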

NSCache vs NSMutableDictionary-

NSCache

  • A cache is a collection of objects or data that can greatly increase the performance of applications.
  • Developers use caches to store frequently accessed objects with transient data that can be expensive to compute. Reusing these objects can provide performance benefits, because their values do not have to be recalculated. However, the objects are not critical to the application and can be discarded if memory is tight. If discarded, their values will have to be recomputed again when needed.
  • A mutable collection you use to temporarily store transient key-value pairs that are subject to eviction when resources are low.
  • Unlike an NSMutableDictionary object, a cache does not copy the key objects that are put into it.
  • NSCache provides two other useful "limit" features: limiting the number of cached elements and limiting the total cost of all elements in the cache. To limit the number of elements that the cache is allowed to have, call the method setCountLimit:. For example, if you try to add 11 items to a cache whose countLimit is set to 10, the cache could automatically discard one of the elements.
  • When adding items to a cache, you can specify a cost value to be associated with each key-value pair. Call the setTotalCostLimit: method to set the maximum value for the sum of all the cached objects’ costs. Thus, when an object is added that pushes the totalCost above the totalCostLimit, the cache could automatically evict some of its objects in order to get back below the threshold.
  • NSCache is like an NSMutableDictionary, except that Foundation may automatically remove an object at any time to relieve memory pressure.
  • This is good for managing how much memory the cache uses, but can cause issues if you rely on an object that may potentially be removed.
  • NSCache also stores weak references to keys rather than strong references.
  • iOS will automatically remove objects from the cache if the device is running low on memory.
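A minimal NSCache sketch using the limits described above (the key name and sizes are illustrative):

```swift
import Foundation

// Sketch: an NSCache with count and cost limits. Keys and values must be
// class types, hence NSString and NSData here.
let imageCache = NSCache<NSString, NSData>()
imageCache.countLimit = 100                   // at most ~100 entries
imageCache.totalCostLimit = 10 * 1024 * 1024  // evict when total "cost" exceeds ~10 MB

let bytes = NSData(data: Data(repeating: 0, count: 1024))
imageCache.setObject(bytes, forKey: "avatar-42", cost: bytes.length)

// A lookup may return nil at any time after eviction, so callers must be
// ready to recompute the value.
let cached = imageCache.object(forKey: "avatar-42")
```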

Resource crunch-

didReceiveMemoryWarning():

Sent to the view controller when the app receives a memory warning.

Your app never calls this method directly. Instead, this method is called when the system determines that the amount of available memory is low.

You can override this method to release any additional memory used by your view controller. If you do, your implementation of this method must call the super implementation at some point.
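A minimal override sketch (the cache property is an illustrative stand-in for whatever your controller can recreate):

```swift
import UIKit

// Sketch: release recreatable resources when the system reports memory
// pressure. thumbnailCache is an illustrative property.
class GalleryViewController: UIViewController {
    let thumbnailCache = NSCache<NSString, UIImage>()

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()     // always call through to super
        thumbnailCache.removeAllObjects()   // drop anything we can rebuild later
    }
}
```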

Location updates-

Unnecessary location updates help neither the application nor the server.

How do we reduce location updates in applications like Ola or Swiggy?
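One hedged approach, sketched with Core Location (the exact thresholds depend on the product):

```swift
import CoreLocation

// Sketch: throttle location updates instead of streaming raw GPS fixes.
let manager = CLLocationManager()
manager.desiredAccuracy = kCLLocationAccuracyHundredMeters  // skip GPS-level precision when coarse data is enough
manager.distanceFilter = 100                                // report only moves of at least 100 m
manager.pausesLocationUpdatesAutomatically = true           // let the system pause updates when the user is stationary

// For very coarse tracking, significant-change monitoring is far cheaper
// than continuous updates:
// manager.startMonitoringSignificantLocationChanges()
```

On top of this, batching the resulting coordinates before uploading saves both battery and server hits.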

Frequent Analytics log-

  • You might be using analytics logs in your iOS applications: different logs for screen views, button clicks, payment options, app recording, app launch, and almost anywhere else you want
  • You might have integrated 2–3 analytics SDKs to track events

But did you know it can impact your app’s performance?

// Datadog Analytics

let logger = Logger.builder
    .sendNetworkInfo(true)
    .printLogsToConsole(true, usingFormat: .shortWith(prefix: "[iOS App] "))
    .set(datadogReportingThreshold: .info)
    .build()

logger.debug("A debug message.")
logger.info("Some relevant information?")
logger.notice("Have you noticed?")
logger.warn("An important warning…")
logger.error("An error was met!")
logger.critical("Something critical happened!")
logger.info("Clicked OK", attributes: ["context": "onboarding flow"])


// Google Analytics
Analytics.logEvent(AnalyticsEventSelectContent, parameters: [
    AnalyticsParameterItemID: "id-\(title!)",
    AnalyticsParameterItemName: title!,
    AnalyticsParameterContentType: "cont",
])

// AWS Analytics
let eventClient = AWSMobileAnalytics(forAppId: "MyMobileAnalyticsAppId").eventClient

guard let client = eventClient else {
    print("Error creating AMA event client")
    return
}
guard let event = client.createEvent(withEventType: "test_50_logIn") else {
    print("Error creating AMA event")
    return
}
event.addAttribute("username", forKey: "sample")
event.addAttribute("device", forKey: "ios")
client.record(event)

client.submitEvents()


// Private logs service

- What happens when the app’s network connectivity is low?
- Have you given more importance to analytics than to other important API services that affect the user experience?
- What happens when frequent logging to the analytics server affects server performance?
- What happens when the number of hits to the server directly impacts the server’s cost?
Let’s suppose FB sent a log to the server for every event: what would the cost to the server be?

      
Different kinds of Events

AnalyticsEventAdImpression,
AnalyticsEventAddPaymentInfo,
AnalyticsEventAddShippingInfo,
AnalyticsEventAddToCart,
AnalyticsEventAddToWishlist,
AnalyticsEventAppOpen,
AnalyticsEventBeginCheckout,
AnalyticsEventCampaignDetails,
AnalyticsEventEarnVirtualCurrency,

Send Analytics event based on priority

  • Different analytics events have different priorities

High priority: payment, subscribe, ad clicked
Low priority: screen view, navigation, scrolling, gestures

  • Store low-priority event information in a local DB using Core Data
  • Send high-priority events to the analytics server immediately, concatenated with the queued local DB events
  • Whenever the app moves between the background/inactive/foreground states, send the local DB events to the server so that no event gets lost
  • Use a timer or an event-count threshold to send the locally stored analytics to the server
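The buffering strategy above can be sketched as follows (AnalyticsBatcher and send are illustrative; a real implementation would persist the buffer with Core Data and flush on lifecycle transitions):

```swift
import Foundation

// Sketch of priority-based analytics batching: high-priority events flush
// immediately, carrying any queued low-priority events with them;
// low-priority events are buffered and flushed in batches.
enum Priority { case high, low }
struct AnalyticsEvent { let name: String; let priority: Priority }

final class AnalyticsBatcher {
    private var buffer: [AnalyticsEvent] = []
    private let batchSize: Int
    private let send: ([AnalyticsEvent]) -> Void

    init(batchSize: Int = 20, send: @escaping ([AnalyticsEvent]) -> Void) {
        self.batchSize = batchSize
        self.send = send
    }

    func log(_ event: AnalyticsEvent) {
        buffer.append(event)
        if event.priority == .high || buffer.count >= batchSize {
            flush()
        }
    }

    func flush() {  // also call this on background/foreground transitions
        guard !buffer.isEmpty else { return }
        send(buffer)
        buffer.removeAll()
    }
}
```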

Composition over Inheritance-


class BaseViewController: UIViewController {
    var completionHandler: (() -> Void)?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    func checkInternetConnection() {}
    func analyticsEvent() {}
    func viewSetting() {}
    func navigationBarSettings() {}
    func checkBackgroundState() {}
    func checkLoginState() {}
    func networkChecking() {}
    func emptyState() {}
    func moveToRootViewController() {}
    func securityChecking() {}
    final func coreData() {}
}

extension BaseViewController {
    func cacheSetting() {}
    func lowMemoryCheck() { coreData() }
}

class SimpleViewController: BaseViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
    }
    /*
    SimpleViewController does not need the functionality below:
    func checkBackgroundState() {}
    func baseViewModel() {}
    func networkChecking() {}
    func emptyState() {}
    func functionality4() {}
    func moveToRootViewController() {}
    func securityChecking() {}
    */
}

let vc = SimpleViewController()


protocol AnalyticsProtocol {
    func captureEvent() -> Int
}

protocol CheckBackgroundStateProtocol {
    func backgroundState()
}

protocol ViewControllerProtocol {
    func viewSetting()
}

class SimpleViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
    }
}

extension SimpleViewController: AnalyticsProtocol {
    func captureEvent() -> Int { return 1 }
}

extension SimpleViewController: ViewControllerProtocol {
    func viewSetting() {}
}

class ComplexViewController: UIViewController {
    var analytics: AnalyticsProtocol
    var backgroundCheck: CheckBackgroundStateProtocol

    init(analytics: AnalyticsProtocol, backgroundCheck: CheckBackgroundStateProtocol) {
        self.analytics = analytics
        self.backgroundCheck = backgroundCheck
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

struct CoreAnalytics: AnalyticsProtocol {
    func captureEvent() -> Int {
        return 2
    }
}

struct BackgroundCheck: CheckBackgroundStateProtocol {
    func backgroundState() {}
}

let complexVC = ComplexViewController(analytics: CoreAnalytics(), backgroundCheck: BackgroundCheck())

struct MockCoreAnalytics: AnalyticsProtocol {
    func captureEvent() -> Int {
        return 3
    }
}

struct MockBackgroundCheck: CheckBackgroundStateProtocol {
    func backgroundState() {}
}

let mockComplexVC = ComplexViewController(analytics: MockCoreAnalytics(), backgroundCheck: MockBackgroundCheck())
XCTAssertNotEqual(mockComplexVC.analytics.captureEvent(), 1)

Multiple tasks-


Task1 depends on Task2
Task2 depends on Task3
Task4 depends on Task3
Task5 is independent

We can use different sequences for executing the above tasks:

1. Task3 ----> Task2 ----> Task1 ----> Task4 ----> Task5 ----->(Notify)

2. Task3 ----> Task2
-----> Task1
Task5

3. Task5----->(High priority)
Task3(High priority) ----> Task2
-----> Task1

4. Task5----->(High priority)
Task3 ----> Task2
-----> Task1

5. Main Thread(Blocked)----> Task5----->(High priority)
Task3 ----> Task2
-----> Task1



----> Denotes starting point of task

As you can see in the above 5 sequences, the completion timing of the tasks will vary based on the technique applied: serial vs concurrent, async vs sync, dispatch groups, operation queues, GCD, semaphores.

Sync and async primarily affect the source of the submitted task, i.e. the queue it is being submitted from. When running sync, your app waits and blocks the current run loop until the execution finishes before moving on to the next task. Alternatively, a task that runs async starts but returns execution to your app immediately; this way the app is free to run other tasks while the first one is executing.

Serial queues have only a single thread associated with them, and thus allow only a single task to execute at any given time. Concurrent queues are able to utilize as many threads as the system has resources for. Threads are created and released as necessary on a concurrent queue.
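A minimal sketch of the difference (the queue labels are illustrative):

```swift
import Foundation

let serial = DispatchQueue(label: "com.example.serial")              // one task at a time
let concurrent = DispatchQueue(label: "com.example.concurrent",
                               attributes: .concurrent)              // many tasks at once

var order: [Int] = []
// A serial queue runs blocks strictly in submission order.
serial.async { order.append(1) }
serial.async { order.append(2) }

// sync blocks the caller until the submitted block finishes,
// so after this line both async blocks above are done too.
serial.sync { order.append(3) }
// order == [1, 2, 3]

// On a concurrent queue, blocks may run simultaneously on different
// threads, so their relative order is not guaranteed.
concurrent.async { /* work A */ }
concurrent.async { /* work B */ }
```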

Dispatch groups are used when you have a load of things you want to do that can all happen at once, but you need to wait for them all to finish before doing something else.
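A short sketch of that pattern, with simulated work standing in for real tasks:

```swift
import Foundation

let group = DispatchGroup()
let queue = DispatchQueue.global(qos: .userInitiated)
var finished = 0
let lock = NSLock()

for _ in 1...3 {
    group.enter()
    queue.async {
        // Simulated independent work.
        lock.lock(); finished += 1; lock.unlock()
        group.leave()
    }
}

// Blocks the current (non-main) thread until every enter() has a matching leave().
group.wait()
// finished == 3 here

// Alternatively, get a callback instead of blocking:
group.notify(queue: .main) {
    print("all jobs done")
}
```

Prefer `notify(queue:)` in app code; `wait()` is only safe off the main thread.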

If you use the wrong methodology, you will face performance issues.

Note:- We need to figure out how to avoid threading issues such as deadlock, race conditions, and priority inversion without affecting performance.

Task1 depends on Task2 
Task2 depends on Task3
Task4 depends on Task3
Task5 independent

If the above tasks need to be executed 1000 times, how can we change our approach?

-> If each task is a heavy process/HTTP call/time-consuming operation that has no side effects, then we can use
NSCache, URLCache, or hashmap (memoization) techniques.

-> Cache-Control is an HTTP header used to specify browser caching policies in both client requests
and server responses. Policies include how a resource is cached,
where it's cached, and its maximum age before expiring.

Long polling vs short polling

  • Periodically check for data. For instance, you could send a request to the server every two seconds. Why is this a bad idea? Every request to a server costs someone something: if you have access to an API, you are likely paying per request, and you don't want to send unnecessary requests if the data isn't actually updating every two seconds.
  • Polling is a technique by which the client asks the server for new data regularly. We can do polling in two ways: short polling and long polling.
  • Short polling is a timer that fires at fixed delays, whereas long polling is based on Comet (i.e. the server sends data to the client when a server event happens, with no delay).
  • Alternatively, make a request to the server for data and hold the connection until there is new data. The benefit is fewer requests, made only when you need them. The disadvantage is that you still have to open a new request and connection to the server after you receive new data.
  • Long polling is when a client sends an HTTP request with a long timeout, and the server uses that timeout window to send data to the client. Long polling works, but it has a drawback: it ties up server resources for the duration of the long poll, even when no data is available to send.
  • Long polling is a method of maintaining a close-to-persistent connection between client and server. The client sends a request to the server, and the connection is not closed until the server responds. Immediately following the response and the closing of the connection, the client resends the request, thereby reopening the connection. While this does create the impression of a persistent connection, it does not truly enable bidirectional communication, and it is far less versatile than WebSockets.
  • APNs is preferred over polling: with APNs, your app gets notified about updates on an as-needed basis. Another benefit is that polling only works while your app is active and in the foreground.
  • HTTP long polling is a technique used to push information to a client as soon as it becomes available on the server, rather than making the client repeatedly ask for it, which makes it close to real-time.
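A client-side long-polling loop can be sketched with URLSession. The endpoint here is hypothetical; the assumption is that the server holds the connection open until it has data or the timeout expires:

```swift
import Foundation

// Hypothetical endpoint; the server holds the connection open until data exists.
let updatesURL = URL(string: "https://example.com/updates")!

func longPoll() {
    var request = URLRequest(url: updatesURL)
    request.timeoutInterval = 60   // the long timeout is the "long" in long polling

    URLSession.shared.dataTask(with: request) { data, _, error in
        if let data = data {
            print("received \(data.count) bytes")
        }
        // Reopen the connection immediately, whether we got data or timed out.
        longPoll()
    }.resume()
}

longPoll()
```

A production version would also back off on repeated errors and stop polling while the app is in the background.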

Mixing struct and class

This is an interesting discussion in the community - when should I use a class vs a struct?


struct LoginStatus {
    var lastLogin: Date?
    var holidayList: [Date]?
}

struct User {
    var id: String
    var name: String
    var Address: String
    var loginState: LoginStatus
    var tasks: [Task]
}

extension User: Hashable {
    static func == (lhs: User, rhs: User) -> Bool {
        lhs.id == rhs.id
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(id)
    }
}

extension User {
    func addTask(task: Task) {}
}

struct Task: Equatable {
    var taskId: String
    var name: String
    var projectID: String
    var lastupdated: Date
    var assignee: [User]
}

extension Task {
    func operation(_ user: [User]) {}
}

struct MappingTask {
    var assigned: [User: [Task]?]
}

struct UserMap {
    var assigneList = [User: [Task]?]()
    subscript(index: User) -> [Task]? {
        get {
            return assigneList[index]?.flatMap { $0 }
        }
    }
}

Value types are stored on the stack, which makes them fast. Reference types are stored on the heap, which requires extra operations to access the value and to manage the retain count of the object, and that is overhead.

If you want to access a class's variables, it has to look at both the stack and the heap.

Every instance of a reference type also carries extra header fields used internally by the runtime. (In Swift these hold the pointer to the class metadata and the reference counts; in .NET's CLR the analogous fields are the ObjectHeader and MethodTable.)

In the Standard Library, examples of value types with child references are String, Array, Dictionary and Set. These value types contain internal reference types that manage the storage of elements in the heap, allowing them to increase/decrease in size as needed.

Since heap operations are more expensive than stack ones, copying heap allocated value types is not a constant operation like in stack allocated ones. To prevent this from hurting performance, the Standard Library’s extensible data structures are copy-on-write.

The more reference types you have inside of a value type, the more reference counting overhead you are going to have when copying it, leading to potentially nasty performance issues.

In above examples

var task: Task = Task(taskId: UUID().uuidString, name: "Task name", projectID: UUID().uuidString,
                      lastupdated: Date(), assignee: [])
var user: User = User(id: UUID().uuidString, name: "User", Address: "",
                      loginState: LoginStatus(), tasks: [])

for index in (1...500000) {
    getAccess(index, user, task)
}

func getAccess(_ index: Int, _ user: User, _ task: Task) {
    let id = user.id
    let taskId = task.taskId
}


Given struct behavior, you might expect this to create many heap allocations,
since assigning a struct creates a copy.

Here the structs Task & User contain several String variables, whose backing storage lives on the heap.

Swift Strings almost always have a reference representation. By contrast, Swift Ints almost always have a value representation. But there are exceptions. Short strings of common characters can be represented as “tagged pointers”, where the string value is stored inside the reference. Ints bridged from Obj-C NSNumber objects can be represented as referenced objects or as tagged pointers, as well as the actual values.

Similarly, Array and Dictionary can be expected to have a reference representation even though they are value objects, but it’s possible that some common values (e.g. empty ones) might have a value or tagged pointer representation, too.

Copy on Write

  • Make a copy only when it is necessary (e.g. when we change/write). By default, value types do not support the COW (copy-on-write) mechanism, but some system structures like the collections (Array, Dictionary, Set) support it.
  • If we just assign a struct variable, it keeps pointing to the same heap storage until a modification is made.
  • In a situation where many reference types are stored in a struct, a copy of that storage is created only when we modify some property of the struct.
  • Copy-on-write comes built in with all collections from the Swift standard library.
  • Swift arrays are values, but the content of the array is not copied around every time the array is passed as an argument, because it features copy-on-write traits.
  • Array is implemented with copy-on-write behaviour - you'll get it regardless of any compiler optimisations.
  • At a basic level, Array is just a structure that holds a reference to a heap-allocated buffer containing the elements - therefore multiple Array instances can reference the same buffer. When you come to mutate a given array instance, the implementation checks whether the buffer is uniquely referenced, and if so, mutates it directly. Otherwise, the array performs a copy of the underlying buffer in order to preserve value semantics.
  • Swift always stores reference types, and the backing storage of String, in the heap.
  • If you are using too many structs of large size, this can put an excessive strain on available resources.
  • When a nested struct gets assigned to another struct instance, the reference types inside it do not get copied; both copies point to the same memory address.
  • A struct can have mutating functions and still preserve value semantics.
  • In Swift, when you have a large value type and have to assign it or pass it as a parameter to a function, copying it can be really expensive in terms of performance, because all the underlying data has to be copied to another place in memory.
  • If a struct has lots of reference types and we need to mutate lots of objects, then a class may be helpful in this case.
struct A {
    var value: Int = 0
}

// Helper to print the address behind a variable (for demonstration only).
func address(_ pointer: UnsafeRawPointer) -> String {
    return String(format: "%p", Int(bitPattern: pointer))
}

// Collection (COW is realized)
var collection1 = [A()]
var collection2 = collection1

// Same addresses: both arrays share one buffer
print(address(&collection1)) // 0x600000c2c0e0
print(address(&collection2)) // 0x600000c2c0e0

// COW for collection2 kicks in on mutation
collection2.append(A())
print(address(&collection2)) // 0x600000c2c440

// Default behavior (COW is not used)
var a1 = A()
var a2 = a1

// Different addresses: the struct was copied on assignment
print(address(&a1)) // 0x7ffee48f24a8
print(address(&a2)) // 0x7ffee48f24a0

// Mutating a2 does not move it
a2.value = 1
print(address(&a2)) // 0x7ffee48f24a0

Use COW semantics for large values to minimise copying data every time. There are two common ways:

Use a wrapper with a value type that supports COW.
Use a wrapper that holds a reference to heap storage for the large data. The point is:
we are able to create multiple copies of the lightweight wrapper, all pointing to the same large data in the heap.
When we try to modify (write), a new reference with a copy of the large data is created - COW in action. The standard library function isKnownUniquelyReferenced(_:) tells you whether there is only a single reference to an object.


final class Ref<T> {
    var val: T
    init(_ v: T) { val = v }
}

struct Box<T> {
    var ref: Ref<T>
    init(_ x: T) { ref = Ref(x) }

    var value: T {
        get { return ref.val }
        set {
            if !isKnownUniquelyReferenced(&ref) {
                ref = Ref(newValue)
                return
            }
            ref.val = newValue
        }
    }
}
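Using the Box wrapper defined above, copies stay cheap until a shared value is mutated:

```swift
let a = Box([1, 2, 3])       // the payload lives once in the heap
var b = a                    // cheap copy: both boxes share the same Ref

b.value.append(4)            // ref is shared, so a fresh Ref is allocated here
// a.value == [1, 2, 3], b.value == [1, 2, 3, 4]

b.value.append(5)            // ref is now unique, so this mutates in place
// b.value == [1, 2, 3, 4, 5]
```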

https://github.com/apple/swift/blob/main/docs/OptimizationTips.rst#advice-use-copy-on-write-semantics-for-large-values

You can improve your app’s performance by swapping unnecessary references with proper static size value types.

struct DeliveryAddress {
    let identifier: String
    let type: String
}

If identifier represents a UUID,
it can safely be replaced by Foundation's UUID struct, which is statically sized.

In a similar fashion, type could easily be a pre-defined enum instead.

struct DeliveryAddress {
    enum AddressType {
        case home
        case work
    }
    let identifier: UUID
    let type: AddressType
}

With these changes, the struct is now statically sized.
Not only was the reference counting overhead eliminated, it is also a lot more type-safe now.

Ref: https://swiftrocks.com/memory-management-and-performance-of-value-types

Implementing copy for reference types

  • You can do so by conforming to NSCopying protocol and implementing the copy method.
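A minimal sketch of NSCopying conformance (the Profile class is an illustrative example):

```swift
import Foundation

final class Profile: NSObject, NSCopying {
    var name: String
    init(name: String) { self.name = name }

    // Return an independent instance so mutations don't leak across references.
    func copy(with zone: NSZone? = nil) -> Any {
        return Profile(name: name)
    }
}

let original = Profile(name: "A")
let duplicate = original.copy() as! Profile
duplicate.name = "B"
// original.name is still "A": the copy gave us value-like independence
```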


Low battery-

Whenever the battery is low, you can initiate various activities to reduce battery drain. The end user will be happy that the "XYZ" application can still be used for 4 hrs even at 10% battery.

  • Display low-quality images/video
  • Remove frequent location update logic
  • Stop background tasks
  • Switch from short polling to long polling
  • Turn off silent notifications
  • Stop sending frequent notifications
  • Stop unnecessary prefetching of data
  • Show only the required information for the time being. If an Ola driver is driving with 10% battery, every API service/screen except the location screen can be hidden.
  • Remove animation effects
  • Reduce CPU and GPU performance
  • Reduce screen brightness
  • Pause discretionary and background activities
var isLowPowerModeActive = ProcessInfo.processInfo.isLowPowerModeEnabled

NotificationCenter.default.addObserver(self,
                                       selector: #selector(powerStateChanged(notification:)),
                                       name: Notification.Name.NSProcessInfoPowerStateDidChange,
                                       object: nil)

@objc func powerStateChanged(notification: NSNotification) {
    // Low Power Mode was switched off; resume normal behavior
    guard !ProcessInfo.processInfo.isLowPowerModeEnabled else { return }
    startDownloading()
    changeStateOfNotificaion()
    changeStateOfVideoQuality()
    changeStateOfLocationUpdate()
    changeStateOfRealTimeUpdate() // Short polling
}

// Note:- Be careful while accessing isLowPowerModeActive from multiple threads

Low network connectivity-

Whenever the network is slow, you can initiate various activities to improve the performance/usability of the application.

  • Display low-quality images/video
  • Remove frequent location update logic
  • Stop background tasks
  • Switch from short polling to long polling
  • Turn off silent notifications
  • Stop sending frequent notifications
  • Stop unnecessary prefetching of data
  • Store data in a cache if the payload is too large to send over a poor network
  • Reduce the number of concurrent API calls
  • If a timeout happens, offer a retry option to reinitiate the task/API call
  • Provide UX for slow loading
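The retry idea from the list above can be sketched as a small helper with exponential backoff. The function name and parameters are illustrative, not a standard API:

```swift
import Foundation

// Hypothetical helper: retries a failing task, doubling the delay each time.
func retry(attempts: Int,
           delay: TimeInterval,
           task: @escaping (@escaping (Bool) -> Void) -> Void,
           completion: @escaping (Bool) -> Void) {
    task { success in
        if success || attempts <= 1 {
            completion(success)
        } else {
            // Wait, then try again with a doubled delay.
            DispatchQueue.global().asyncAfter(deadline: .now() + delay) {
                retry(attempts: attempts - 1, delay: delay * 2,
                      task: task, completion: completion)
            }
        }
    }
}

// Example: a task that succeeds on the second attempt.
var attemptCount = 0
let done = DispatchSemaphore(value: 0)
retry(attempts: 3, delay: 0.01, task: { finish in
    attemptCount += 1
    finish(attemptCount == 2)
}, completion: { success in
    print("success after \(attemptCount) attempts: \(success)")
    done.signal()
})
done.wait()
```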

If we want users to like our software, we should design it to behave like a likeable person: respectful, generous and helpful.

  • Gamification technique to show loading
  • Loading indicator/ Animation to show progress
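One way to detect a poor, constrained, or expensive connection and switch to the degraded behaviors above is the Network framework's NWPathMonitor (iOS 12+; `isConstrained` requires iOS 13). A sketch:

```swift
import Network
import Foundation

let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    if path.isConstrained || path.isExpensive {
        // e.g. Low Data Mode or cellular: request low-quality media,
        // pause prefetching, reduce concurrent API calls.
        print("degrade network usage")
    } else {
        print("full-quality mode")
    }
}
monitor.start(queue: DispatchQueue(label: "net.monitor"))
```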

Local Storage-

What happens when local storage is almost full?

The end user has no idea about the storage crunch, and then the app randomly crashes while in use. Sometimes the application cannot even launch, because many applications store data locally from 4-5 APIs on startup.

It has happened to me many times that LinkedIn crashes in a storage crunch situation, whereas WhatsApp has implemented a beautiful feature that shows a storage-full warning. Even if the app then crashes, the user is aware of the reason and can take the required actions.

extension UIDevice {
    func MBFormatter(_ bytes: Int64) -> String {
        let formatter = ByteCountFormatter()
        formatter.allowedUnits = ByteCountFormatter.Units.useMB
        formatter.countStyle = ByteCountFormatter.CountStyle.decimal
        formatter.includesUnit = false
        return formatter.string(fromByteCount: bytes) as String
    }

    //MARK: Get String Value
    var totalDiskSpaceInGB: String {
        return ByteCountFormatter.string(fromByteCount: totalDiskSpaceInBytes, countStyle: ByteCountFormatter.CountStyle.decimal)
    }

    var freeDiskSpaceInGB: String {
        return ByteCountFormatter.string(fromByteCount: freeDiskSpaceInBytes, countStyle: ByteCountFormatter.CountStyle.decimal)
    }

    var usedDiskSpaceInGB: String {
        return ByteCountFormatter.string(fromByteCount: usedDiskSpaceInBytes, countStyle: ByteCountFormatter.CountStyle.decimal)
    }

    var totalDiskSpaceInMB: String {
        return MBFormatter(totalDiskSpaceInBytes)
    }

    var freeDiskSpaceInMB: String {
        return MBFormatter(freeDiskSpaceInBytes)
    }

    var usedDiskSpaceInMB: String {
        return MBFormatter(usedDiskSpaceInBytes)
    }

    //MARK: Get raw value
    var totalDiskSpaceInBytes: Int64 {
        guard let systemAttributes = try? FileManager.default.attributesOfFileSystem(forPath: NSHomeDirectory() as String),
              let space = (systemAttributes[FileAttributeKey.systemSize] as? NSNumber)?.int64Value else { return 0 }
        return space
    }

    /*
     Total available capacity in bytes for "Important" resources, including space expected to be cleared by purging non-essential and cached resources. "Important" means something that the user or application clearly expects to be present on the local system, but is ultimately replaceable. This would include items that the user has explicitly requested via the UI, and resources that an application requires in order to provide functionality.
     Examples: A video that the user has explicitly requested to watch but has not yet finished watching or an audio file that the user has requested to download.
     This value should not be used in determining if there is room for an irreplaceable resource. In the case of irreplaceable resources, always attempt to save the resource regardless of available capacity and handle failure as gracefully as possible.
     */
    var freeDiskSpaceInBytes: Int64 {
        if #available(iOS 11.0, *) {
            if let space = try? URL(fileURLWithPath: NSHomeDirectory() as String).resourceValues(forKeys: [URLResourceKey.volumeAvailableCapacityForImportantUsageKey]).volumeAvailableCapacityForImportantUsage {
                return space
            } else {
                return 0
            }
        } else {
            if let systemAttributes = try? FileManager.default.attributesOfFileSystem(forPath: NSHomeDirectory() as String),
               let freeSpace = (systemAttributes[FileAttributeKey.systemFreeSize] as? NSNumber)?.int64Value {
                return freeSpace
            } else {
                return 0
            }
        }
    }

    var usedDiskSpaceInBytes: Int64 {
        return totalDiskSpaceInBytes - freeDiskSpaceInBytes
    }
}

print("totalDiskSpaceInBytes: \(UIDevice.current.totalDiskSpaceInBytes)")
print("freeDiskSpace: \(UIDevice.current.freeDiskSpaceInBytes)")
print("usedDiskSpace: \(UIDevice.current.usedDiskSpaceInBytes)")

//Ref:- https://stackoverflow.com/a/47463829/4809746

Limiting animation-

App shouldn’t refresh content unnecessarily, such as in obscured areas on screen, or through excessive use of animations.

Every time your app updates (or “draws”) content to screen, it requires the CPU, GPU, and screen to be active. Extraneous or inefficient drawing can pull system resources out of low-power states or prevent them from powering down altogether, resulting in significant energy use.

  • Reduce the number of views your app uses.
  • Reduce the use of opacity, such as in views that exhibit a translucent blur. If you need to use opacity, avoid using it over content that changes frequently. Otherwise, energy cost is magnified, as both the background view and the translucent view must be updated whenever content changes.
  • Eliminate drawing when your app or its content is not visible, such as when your app’s content is obscured by other views, clipped, or offscreen.
  • Use lower frame rates for animations whenever possible. For example, a high frame rate may make sense during game play, but a lower frame rate may be sufficient for a menu screen. Use a high frame rate only when the user experience calls for it.
  • Use a consistent frame rate when performing an animation. For example, if your app displays 60 frames per second, maintain that frame rate throughout the lifetime of the animation.
  • Avoid using multiple frame rates at once on screen. For example, don’t have a character in your game moving at 60 frames per second, while the clouds in the sky are moving at 30 frames per second. Use the same frame rate for both, even if it means raising one of the frame rates.
  • The standard set of video controls provided by the AVPlayerViewController class automatically hide during media playback. Apps should avoid adding additional layers (even hidden ones) above full screen video without good reason. Displaying controls and other UI elements over a full-screen video when the user requests them—such as via a tap—is fine and expected behavior. However, these elements should be removed when the user isn’t interacting with them.
  • Don’t use too many layers as the amount of memory GPUs can spend on textures is often limited.
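The "lower frame rates" advice above can be applied with CADisplayLink. A sketch for a menu-style animation that does not need the full display refresh rate (class and method names are illustrative):

```swift
import UIKit

final class MenuAnimator {
    private var displayLink: CADisplayLink?

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(step(_:)))
        // A menu screen doesn't need 60/120 fps; cap it to save energy.
        if #available(iOS 15.0, *) {
            link.preferredFrameRateRange = CAFrameRateRange(minimum: 10,
                                                            maximum: 30,
                                                            preferred: 30)
        } else {
            link.preferredFramesPerSecond = 30
        }
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()   // always invalidate when the animation ends
        displayLink = nil
    }

    @objc private func step(_ link: CADisplayLink) {
        // Drive the animation from link.targetTimestamp here.
    }
}
```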

Ref: Apple documentation

Array/Dictionary performance

var dict = ["one": [1], "two": [2, 2], "three": [3, 3, 3]]
print(dict)

// These approaches provide the expected performance of a dictionary
// lookup, but they read neither well nor "Swifty":
if dict["one"] != nil {
    // ...
}
if let _ = dict["one"] {
    // ...
}

// Checking the keys view reads much better. Before SE-0154 (Swift 4) it
// introduced a serious performance penalty - a linear search through the
// dictionary's keys to find a match. With the Keys view it is fast:
if dict.keys.contains("one") {
    // ...
}
// A similar dynamic plays out when comparing dict.index(forKey:) and dict.keys.index(of:).


// Wrong way
// Direct re-assignment
dict["one"] = (dict["one"] ?? []) + [1]

// Optional chaining
dict["one"]?.append(1)

// Both approaches present problems. The first is complex and hard to read.
// The second silently does nothing when "one" is not a key in the dictionary,
// and is therefore less useful even if more streamlined.
// Furthermore, neither approach allows the array to grow in place - they
// introduce an unnecessary copy of the array's contents even though dict is
// the sole holder of its storage.


// Right way: mutate in place through the values view
if let i = dict.index(forKey: "one") {
    dict.values[i].append(1) // no copy here
} else {
    dict["one"] = [1]
}


struct Dictionary<Key: Hashable, Value>: ... {
    /// A collection view of a dictionary's keys.
    struct Keys: Collection {
        subscript(i: Index) -> Key { get }
        // Other `Collection` requirements
    }

    /// A mutable collection view of a dictionary's values.
    struct Values: MutableCollection {
        subscript(i: Index) -> Value { get set }
        // Other `Collection` requirements
    }

    var keys: Keys { get }
    var values: Values { get set }
    // Remaining Dictionary declarations
}

Ref: https://github.com/apple/swift-evolution/blob/main/proposals/0154-dictionary-key-and-value-collections.md

var values = [1,2,3,4]
print(values.capacity) // 4
values.append(5)
print(values.capacity) // 8


var values = [Int]()
values.reserveCapacity(512)
print(values.capacity) // 572

// Why 572 ???
// For performance reasons, the size of the newly allocated storage might
// be greater than the requested capacity. Use the array's capacity property
// to determine the size of the new storage.

for _ in 1...512 {
    values.append(Int.random(in: 1...10))
}
print(values.capacity) // 572

values.removeAll()
print(values.capacity) // 0


var values: [Int] = [0, 1, 2, 3]

// Don't use 'reserveCapacity(_:)' like this
func addTenQuadratic() {
    let newCount = values.count + 10
    values.reserveCapacity(newCount)
    for n in values.count..<newCount {
        values.append(n)
    }
}


mutating func reserveCapacity(_ minimumCapacity: Int)
//Reserves enough space to store the specified number of elements.

To avoid constant reallocations, Swift uses a geometric growth pattern for array capacities — a fancy way of saying that it increases array capacity exponentially rather than in fixed amounts. So, when you add a fifth item to an array with capacity 4, Swift will create the resized array so that it has a capacity of 8. And when you exceed that you’ll get a capacity of 16, then 32, then 64, and so on — it doubles each time.

Now, if you know ahead of time that you’ll be storing 512 items, you can inform Swift by using the reserveCapacity() method. This allows Swift to immediately allocate an array capable of holding 512 items, as opposed to creating a small array then re-allocating multiple times.

Even though using reserveCapacity() can help speed up your code, appending still has a cost: the array has to grow into the reserved storage and fill it element by element.

You can make this slightly faster if you have an idea of the size of the array: pre-allocate it with a count and fill the slots in place, instead of allocating and then appending. Trying this code gave me a slightly faster result.

var arr = [Int](repeating: 0, count: 1_000_000)
for i in 0..<1_000_000 {
    arr[i] = i
}

Reduce the size of the app / updates

  • The default optimization level for the Release configuration is Fastest, Smallest [-Os], which can make your compiled binary very small. Check your target’s build settings, and be sure you’re using this optimization level.
  • Asset catalogs allow Xcode and the App Store to optimize your app’s assets which can significantly reduce the size of your app. Use asset catalogs instead of putting your assets in your app bundle directly; then do the following
  • Tag each asset — for example images, textures, or data assets — with relevant metadata to indicate which devices the asset is for. Doing so maximizes the size reduction that app thinning provides, which can be significant for apps with assets that aren’t required by every device.
  • Use a property list for bundling any data with your app instead of using strings in code
  • Moving data and assets out of your source code and into asset files significantly reduces the size of your app’s binary
  • Using a more efficient image file format is a good way to reduce your app’s size. For example, consider using the HEIF format for images, and the HEVC format for videos.
  • If you’re using PNG files, consider using 8-bit instead of 32-bit PNGs. Doing so can decrease the image size to a quarter of the original size.
  • Compress images. For 32-bit images, using Adobe Photoshop’s “Save for Web” feature can reduce the size of JPEG and PNG images considerably.
  • Instead of always downloading the whole app when an update to the app is available, the App Store creates an update package. It compares one or more prior versions of your app to the new version and creates an optimized package. This package contains only the content that has changed between versions of your app, excluding any content that didn’t change.
  • Don’t make unnecessary modifications to files. Compare the contents of the prior and new versions of your app with diff or another directory comparison tool, and verify that it doesn’t contain any unexpected changes.
  • Store content that you expect to change in an update in separate files from content that you don’t expect to change.
  • Group the infrequently used resources into asset packs. When you upload your app to App Store Connect, asset packs don’t become part of your app’s initial download or app updates. Instead, the app can download them separately as needed. See the On-Demand Resources Guide for more information.
  • App thinning is a technology that ensures that an app’s IPA file only contains resources and code that’s necessary to run the app on a particular device.

Ref: Apple documentation

Disclaimer : I am not taking credit for this blog. I have taken reference from Apple/Swift documentation and other masterclass blog articles. I have tried my best to mention website url.

Thanks Again for reading article.

I hope you enjoyed this first part on performance improvement of iOS applications. I'd love to hear from you: what works, what doesn't? Did I leave anything out? Are there any performance improvement strategies that you'd like to see included here? 🙏🙏

If you found this interesting, you will enjoy these related articles I wrote:

I will be writing upcoming blogs on Security/TDD/Testing/SwiftUI/DI/Interview questions. Feel free to add me on LinkedIn and follow me on Medium to get updates on the next article.
