
Tuesday, October 27, 2015

New in Android Samples: Authenticating to remote servers using the Fingerprint API

Posted by Takeshi Hagikura, Yuichi Araki, Developer Programs Engineer


As we announced in the previous blog post, Android 6.0 Marshmallow is now publicly available to users. Along the way, we’ve been updating our samples collection to highlight exciting new features available to developers.


This week, we’re releasing AsymmetricFingerprintDialog, a new sample demonstrating how to securely integrate with compatible fingerprint readers (like Nexus Imprint) in a client/server environment.


Let’s take a closer look at how this sample works, and talk about how it complements the FingerprintDialog sample we released earlier during the public preview.


Symmetric vs Asymmetric Keys



The Android Fingerprint API protects user privacy by keeping users’ fingerprint features carefully contained within secure hardware on the device. This guards against malicious actors, ensuring that users can safely use their fingerprint, even in untrusted applications.


Android also provides protection for application developers, providing assurances that a user’s fingerprint has been positively identified before providing access to secure data or resources. This protects against tampered applications, providing cryptographic-level security for both offline data and online interactions.


When a user activates their fingerprint reader, they’re unlocking a hardware-backed cryptographic vault. As a developer, you can choose what type of key material is stored in that vault, depending on the needs of your application:



  • Symmetric keys: Similar to a password, symmetric keys allow you to encrypt local data. This is a good choice for securing access to databases or offline files.

  • Asymmetric keys: Provide a key pair, consisting of a public key and a private key. The public key can be safely sent across the internet and stored on a remote server. The private key can later be used to sign data, and the signature can be verified using the public key. Signed data cannot be tampered with, and it positively identifies the original author of that data. In this way, asymmetric keys can be used for network login and for authenticating online transactions. Similarly, the public key can be used to encrypt data such that it can only be decrypted with the private key.


This sample demonstrates how to use an asymmetric key, in the context of authenticating an online purchase. If you’re curious about using symmetric keys instead, take a look at the FingerprintDialog sample that was published earlier.


Here is a visual explanation of how the Android app, the user, and the backend fit together using the asymmetric key flow:




1. Setting Up: Creating an asymmetric keypair



First you need to create an asymmetric key pair as follows:


KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
keyPairGenerator.initialize(
        new KeyGenParameterSpec.Builder(KEY_NAME,
                KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setAlgorithmParameterSpec(new ECGenParameterSpec("secp256r1"))
                // Require fingerprint authentication for every use of the private key
                .setUserAuthenticationRequired(true)
                .build());
keyPairGenerator.generateKeyPair();


Note that .setUserAuthenticationRequired(true) requires that the user authenticate with a registered fingerprint to authorize every use of the private key.

You can then retrieve the public and private keys from the Android Keystore as follows:


KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);

// The public key, which will be sent to your server
PublicKey publicKey =
        keyStore.getCertificate(KEY_NAME).getPublicKey();

// The private key, which never leaves the device
PrivateKey privateKey = (PrivateKey) keyStore.getKey(KEY_NAME, null);


2. Registering: Enrolling the public key with your server



Second, you need to transmit the public key to your backend so that it can later verify that transactions were authorized by the user (that is, signed with the private key corresponding to this public key). This sample uses a fake backend implementation for reference, so it only mimics transmitting the public key; in a real app you would send it to your server over the network.


boolean enroll(String userId, String password, PublicKey publicKey);

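If you're wondering how to serialize the public key for transport (the sample skips this because its backend runs in-process), one option is to Base64-encode the key's standard X.509 encoding. The helper below is an illustrative sketch, not part of the sample:


// Hypothetical helper for sending the public key over the network (uses android.util.Base64).
// The backend can rebuild the key with KeyFactory.getInstance("EC")
// and an X509EncodedKeySpec over the decoded bytes.
String encodePublicKey(PublicKey publicKey) {
    // getEncoded() returns the key in its standard X.509 (SubjectPublicKeyInfo) form.
    return Base64.encodeToString(publicKey.getEncoded(), Base64.NO_WRAP);
}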

3. Let’s Go: Signing transactions with a fingerprint



To allow the user to authenticate the transaction, e.g. to purchase an item, prompt the user to touch the fingerprint sensor.




Then start listening for a fingerprint as follows:


Signature signature = Signature.getInstance("SHA256withECDSA");
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PrivateKey key = (PrivateKey) keyStore.getKey(KEY_NAME, null);
signature.initSign(key);
FingerprintManager.CryptoObject cryptoObject =
        new FingerprintManager.CryptoObject(signature);

CancellationSignal cancellationSignal = new CancellationSignal();
FingerprintManager fingerprintManager =
        context.getSystemService(FingerprintManager.class);
fingerprintManager.authenticate(cryptoObject, cancellationSignal, 0, this, null);

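The this argument passed to authenticate() above is a FingerprintManager.AuthenticationCallback (the sample wires it up through a small helper class). A minimal sketch of the success path, assuming that structure:


@Override
public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) {
    // The Signature inside the CryptoObject is now authorized for a single signing operation.
    Signature signature = result.getCryptoObject().getSignature();
    // Sign the transaction and send it to the backend, as shown in step 4 below.
}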

4. Finishing Up: Sending the data to your backend and verifying



After successful authentication, send the signed piece of data (in this sample, the contents of a purchase transaction) to the backend, like so:


Signature signature = cryptoObject.getSignature();
// Include a client nonce in the transaction so that the nonce is covered by the
// signature. The backend rejects any nonce it has already seen, which prevents
// replay attacks.
Transaction transaction = new Transaction("user", 1, new SecureRandom().nextLong());
try {
    signature.update(transaction.toByteArray());
    byte[] sigBytes = signature.sign();
    // Send the transaction and its signature to the dummy backend
    if (mStoreBackend.verify(transaction, sigBytes)) {
        mActivity.onPurchased(sigBytes);
        dismiss();
    } else {
        mActivity.onPurchaseFailed();
        dismiss();
    }
} catch (SignatureException e) {
    throw new RuntimeException(e);
}


Last, verify the signed data in the backend using the public key enrolled in step 2:


@Override
public boolean verify(Transaction transaction, byte[] transactionSignature) {
    try {
        if (mReceivedTransactions.contains(transaction)) {
            // The transaction (including its client nonce) has already been seen,
            // so reject it to prevent replay attacks.
            return false;
        }
        mReceivedTransactions.add(transaction);
        PublicKey publicKey = mPublicKeys.get(transaction.getUserId());
        Signature verificationFunction = Signature.getInstance("SHA256withECDSA");
        verificationFunction.initVerify(publicKey);
        verificationFunction.update(transaction.toByteArray());
        if (verificationFunction.verify(transactionSignature)) {
            // Transaction is verified with the public key associated with the user
            // Do some post purchase processing in the server
            return true;
        }
    } catch (NoSuchAlgorithmException | InvalidKeyException | SignatureException e) {
        // In a real app, report an error to the user here.
    }
    return false;
}


At this point, you can assume the user has been positively authenticated with their fingerprint because, as noted in step 1, user authentication is required before every use of the private key. Do the post-purchase processing in the backend and tell the user that the transaction was successful!


Other updated samples


We also have a couple of Marshmallow-related updates to the Android for Work APIs this month for you to peruse:


  • AppRestrictionEnforcer and AppRestrictionSchema
    These samples were originally released when the App Restriction feature was introduced as part of the Android for Work API in Android 5.0 Lollipop. AppRestrictionEnforcer demonstrates how to set restrictions on other apps as a profile owner. AppRestrictionSchema defines some restrictions that can be controlled by AppRestrictionEnforcer. This update shows how to use two additional restriction types introduced in Android 6.0.


We hope you enjoy the updated samples. If you have any questions regarding the samples, please visit us on our GitHub page and file issues or send us pull requests.



    Thursday, October 22, 2015

    Google Developers teams up with General Assembly to launch Android Development Immersive training course

    Posted by Peter Lubbers, Senior Program Manager, Google Developer Training


    Today at the Big Android BBQ we announced that we have teamed up with General Assembly (GA), a global education institution transforming thinkers into creators, to create a new Android Development Immersive training course. This 12-week, full-time course will first be offered in January 2016 at GA’s New York campus and in February at GA’s San Francisco campus, and will roll out to additional campuses over the next year. It is the first in-person training program of its kind that Google Developers has designed and built.




    The Google Developer Relations team worked with General Assembly to ensure the Android Development Immersive bootcamp gives developers access to the best instructors and the latest hands-on material for creating successful app experiences and businesses. To effectively reach over a billion Android users globally, it's important for developers to build high-quality apps that are beautifully designed, performant, and delightful to use.


    “We are constantly looking at the economy and job market for what skills are most in-demand. Demand for developers who can address this market and build new applications is tremendous,” said Jake Schwartz, co-founder and CEO, General Assembly. “Developing this course in partnership with Google Developers allows us to provide students with the most relevant skills, ensuring a reliable pipeline of talented developers ready to meet the urgent demand of companies in the Android ecosystem, a key component of GA's education-to-employment model."


    Registration in the Android Development Immersive includes access to GA’s career preparation services and support, also known as Outcomes, which includes assistance in creating portfolio-ready projects, access to career development workshops, networking events, and coaching and support throughout the job search process. Through in-person hiring events, mock interviews, and GA’s online job search platform, graduates connect with GA’s hiring partners, a network of close to 2,000 employers globally.


    One of these employers is Vice Media. "I'm really excited to see the candidates coming out of the GA Android course. The fact that they're working with both Google and potential employers to shape the curriculum around real-world problems will make a huge difference. Textbook learning is one thing, but classroom learning with practitioners is a level we have all been waiting for. In fact, Vice Media is going to be hiring an apprentice right out of this course," said Ben Jackson, Director of Mobile Apps for Vice Media.


    Learn more and sign up here.


    Get your bibs ready for Big Android BBQ!



    Posted by Colt McAnlis, Senior Texas-Based Developer Advocate


    We’re excited to be involved in the Big Android BBQ (BABBQ) this year because of one thing: passion! Just like BBQ, Android development is much better when passionate people obsess over it. This year’s event is no exception.




    Take +Ian Lake for example. His passion about Android development runs so deep, he was willing to chug a whole bottle of BBQ sauce just so we’d let him represent Android Development Patterns at the conference this year. Or even +Chet Haase, who suffered a humiliating defeat during the Speechless session last year (at the hands of this charming bald guy). He loves BBQ so much that he’s willing to come back and lose again this year, just so he can convince you all that #perfmatters. Let’s not forget +Reto Meier. That mustache was stuck on his face for days. DAYS! All because he loves Android Development so much.


    When you see passion like this, you just have to be part of it, which is why this year’s BABBQ is jam-packed with awesome Google Developers content. We’re going to be talking about performance, new APIs in Android 6.0 Marshmallow, NDK tricks, and Wear optimization. We even have a new set of code labs so that folks can get their hands on new code to use in their apps.


    Finally, we haven’t even mentioned our BABBQ attendees, yet. We’re talking about people who are so passionate about an Android development conference that they are willing to travel to Texas to be a part of it!


    If BBQ isn’t your thing, or you won’t be able to make the event in person, the Android Developers and Google Developers YouTube channels will be there in full force. We’ll be recording the sessions and posting them to Twitter and Google+ throughout the event.


    So, whether you are planning to attend in person or watch online, we want you to remain passionate about your Android development.


    Friday, October 16, 2015

    Game Performance: Vertex Array Objects

    Previously, we showed how you can use vertex layout qualifiers to increase the performance and determinism of your OpenGL application. In this post, we’ll show another useful technique that will help you produce increased performance and cleaner code when drawing objects.


    Binding the vertex buffer




    Before drawing onto the screen, you need to bind your vertex data (e.g. positions, normals, UVs) to the corresponding vertex shader attributes. To do that, you need to bind the vertex buffer, enable the generic vertex attribute, and use glVertexAttribPointer to describe the layout of the buffer.


    Therefore, a draw call might look like this:


    const GLuint ATTRIBUTE_LOCATION_POSITIONS   = 0;
    const GLuint ATTRIBUTE_LOCATION_TEXTUREUV = 1;
    const GLuint ATTRIBUTE_LOCATION_NORMALS     = 2;
    
    // Bind shader program, uniforms and textures
    // ...
    
    // Bind the vertex buffer
    glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );
    
    // Set the vertex attributes (the last argument is a byte offset into the buffer)
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_POSITIONS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_POSITIONS, 3, GL_FLOAT, GL_FALSE, 32, (const void*)0 );
    
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_TEXTUREUV );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_TEXTUREUV, 2, GL_FLOAT, GL_FALSE, 32, (const void*)12 );
    
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, (const void*)20 );
    
    // Draw elements
    glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );


    There are several reasons why we might not like this code very much. The first is that we need to cache the layout of the vertex buffer to enable and disable the right attributes before drawing. This means we are either hard-coding or saving some amount of data for a nearly meaningless task.


    The second reason is performance. Having to tell the drivers which attributes to individually activate is suboptimal. It would be best if we could precompile this information and deliver it all at once.


    Lastly, and purely for aesthetics, our draw call is cluttered by long boilerplate code. It would be nice to get rid of it.





    Did you know there is another reason why someone might frown on this code? The code is making use of layout qualifiers, which is great! But since it’s already using OpenGL ES 3+, it would be even better if the code also used geometry instancing. By batching many instances of a mesh into a single draw call, you can really boost performance.






    So how can we improve on the above code?


    Vertex Array Objects (VAOs)



    If you are using OpenGL ES 3 or higher, you should use Vertex Array Objects (or "VAOs") to store your vertex attribute state.


    Using a VAO allows the drivers to compile the vertex description format for repeated use. In addition, this frees you from having to cache the vertex format needed for glVertexAttribPointer, and it also results in less per-draw boilerplate code.


    Creating Vertex Array Objects



    The first thing you need to do is create your VAO. It is created once per mesh, alongside the vertex buffer object, like this:


    const GLuint ATTRIBUTE_LOCATION_POSITIONS   = 0;
    const GLuint ATTRIBUTE_LOCATION_TEXTUREUV = 1;
    const GLuint ATTRIBUTE_LOCATION_NORMALS     = 2;
    
    // Bind the vertex buffer object
    glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );
    
    // Create a VAO
    GLuint vao;
    glGenVertexArrays( 1, &vao );
    glBindVertexArray( vao );
    
    // Set the vertex attributes as usual
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_POSITIONS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_POSITIONS, 3, GL_FLOAT, GL_FALSE, 32, (const void*)0 );
    
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_TEXTUREUV );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_TEXTUREUV, 2, GL_FLOAT, GL_FALSE, 32, (const void*)12 );
    
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, (const void*)20 );
    
    // Unbind the VAO to avoid accidentally overwriting the state
    // Skip this if you are confident your code will not do so
    glBindVertexArray( 0 );


    You have probably noticed that this is very similar to our previous code section except that we now have the addition of:


    // Create a vertex array object
    GLuint vao;
    glGenVertexArrays( 1, &vao );
    glBindVertexArray( vao );


    These lines create and bind the VAO. All glEnableVertexAttribArray and glVertexAttribPointer calls after that are recorded in the currently bound VAO, and that greatly simplifies our per-draw procedure as all you need to do is use the newly created VAO.


    Using the Vertex Array Object



    The next time you want to draw using this mesh all you need to do is bind the VAO using glBindVertexArray.


    // Bind shader program, uniforms and textures
    // ...
    
    // Bind Vertex Array Object
    glBindVertexArray( vao );
    
    // Draw elements
    glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );


    You no longer need to go through all the vertex attributes. This makes your code cleaner, makes per-frame calls shorter and more efficient, and allows the drivers to optimize the binding stage to increase performance.







    Did you notice we are no longer calling glBindBuffer before drawing? That’s because glVertexAttribPointer records a reference to whichever buffer is bound at the time it is called, even though the VAO itself does not record glBindBuffer calls.



    Want to learn more about how to improve your game performance? Check out our Game Performance article series. If you are building on Android, you might also be interested in the Android Performance Patterns.




    Android Support Library 23.1

    The Android Support Library is a collection of libraries available on a wide array of API levels that help you focus on the unique parts of your app, providing pre-built components, new functionality, and compatibility shims.


    With the latest release of the Android Support Library (23.1), you will see improvements across the Support V4, Media Router, RecyclerView, AppCompat, Design, Percent, Custom Tabs, Leanback, and Palette libraries. Let’s take a closer look.


    Support V4



    The Support V4 library focuses on making supporting a wide variety of API levels straightforward with compatibility shims and backporting specific functionality.


    NestedScrollView is a ScrollView that supports nested scrolling back to API 4. You’ll now be able to set an OnScrollChangeListener to receive callbacks when the scroll X or Y positions change.

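    For example, a minimal sketch of registering the new listener (the scroll_view id is illustrative):


    NestedScrollView scrollView = (NestedScrollView) findViewById(R.id.scroll_view);
    scrollView.setOnScrollChangeListener(new NestedScrollView.OnScrollChangeListener() {
        @Override
        public void onScrollChange(NestedScrollView v, int scrollX, int scrollY,
                int oldScrollX, int oldScrollY) {
            // React to the new scroll position, e.g. show or hide a toolbar shadow.
        }
    });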

    There are a lot of pieces that make up a fully functioning media playback app, with much of it centered around MediaSessionCompat. A media button receiver, a key part of handling playback controls from hardware or Bluetooth controls, is now formalized in the new MediaButtonReceiver class. This class makes it possible to forward received playback controls to a Service that manages your MediaSessionCompat, reusing the Callback methods already required for API 21+ and centralizing support for all API levels and media control events in one place. A simplified constructor for MediaSessionCompat is also available, automatically finding a media button receiver in your manifest for use with MediaSessionCompat.

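    As a rough sketch, the simplified constructor can be called from the Service that owns playback (the names below are illustrative, not from a specific sample):


    MediaSessionCompat mediaSession = new MediaSessionCompat(this, "MyPlaybackService");
    mediaSession.setCallback(new MediaSessionCompat.Callback() {
        @Override
        public void onPlay() {
            // Start or resume playback.
        }

        @Override
        public void onPause() {
            // Pause playback.
        }
    });
    mediaSession.setActive(true);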

    Media Router



    The Media Router Support Library is the key component for connecting and sending your media playback to remote devices, such as video and audio devices with Google Cast support. It also provides the mechanism, via MediaRouteProvider, to enable any application to create and manage a remote media playback device connection.


    In this release, MediaRouteChooserDialog (the dialog that controls selecting a valid remote device) and MediaRouteControllerDialog (the dialog to control ongoing remote playback) have both received a brand new design and additional functionality as well. You’ll find the chooser dialog sorts devices by frequency of use and includes a device type icon for easy identification of different devices while the controller dialog now shows current playback information (including album art).




    To feel like a natural part of your app, the content color for both dialogs is now based on the colorPrimary of your alert dialog theme:


    <!-- Your app theme set on your Activity -->
    <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
      <item name="colorPrimary">@color/primary</item>
      <item name="colorPrimaryDark">@color/primaryDark</item>
      <item name="alertDialogTheme">@style/AppTheme.Dialog</item>
    </style>


    <!-- Theme for the dialog itself -->
    <style name="AppTheme.Dialog" parent="Theme.AppCompat.Light.Dialog.Alert">
      <item name="colorPrimary">@color/primary</item>
      <item name="colorPrimaryDark">@color/primaryDark</item>
    </style>



    RecyclerView



    RecyclerView is an extremely powerful and flexible way to show a list, grid, or any view of a large set of data. One advantage over ListView or GridView is the built-in support for animations as items are added, removed, or repositioned.


    This release significantly changes the animation system for the better. By using the new ItemAnimator’s canReuseUpdatedViewHolder() method, you’ll be able to choose to reuse the existing ViewHolder, enabling item content animation support. The new ItemHolderInfo and associated APIs give the ItemAnimator the flexibility to collect any data it wants at the correct point in the layout lifecycle, passing that information into the animate callbacks.


    Note that this new API is not backward compatible. If you previously implemented an ItemAnimator, you can instead extend SimpleItemAnimator, which provides the old API by wrapping the new API. You’ll also notice that some methods have been entirely removed from ItemAnimator. For example, if you were calling recyclerView.getItemAnimator().setSupportsChangeAnimations(false), this code won’t compile anymore. You can replace it with:


    ItemAnimator animator = recyclerView.getItemAnimator();
    if (animator instanceof SimpleItemAnimator) {
      ((SimpleItemAnimator) animator).setSupportsChangeAnimations(false);
    }


    AppCompat



    One focus of the AppCompat Support Library has been providing a consistent set of widgets across all API levels, including the ability to tint those widgets to match your branding and accent colors.


    This release adds tint-aware versions of SeekBar (for tinting the thumb) as well as ImageButton and ImageView (providing backgroundTint support), which will automatically be used when you use the platform versions in your layouts. You’ll also find that SwitchCompat has been updated to match the styling found in Android 6.0 Marshmallow.


    Design



    The Design Support Library includes a number of components to help implement the latest in the Google design specifications.


    TextInputLayout expands its existing functionality of floating hint text and error indicators with new support for character counting.

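    Enabling the counter from code might look like this (the view id and limit are illustrative; equivalent XML attributes are also available):


    TextInputLayout inputLayout = (TextInputLayout) findViewById(R.id.input_layout);
    inputLayout.setCounterEnabled(true);
    inputLayout.setCounterMaxLength(140);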

    AppBarLayout supports a number of scroll flags that affect how child views react to scrolling (e.g. scrolling off the screen). New to this release is SCROLL_FLAG_SNAP, which ensures that when scrolling ends, the view is never left partially visible; instead, it is scrolled to its nearest edge, making it either fully visible or scrolled completely off the screen. You’ll also find that AppBarLayout now allows users to start scrolling from within the AppBarLayout rather than only from within your scrollable view - this behavior can be controlled by adding a DragCallback.

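    You would normally request snapping in XML with app:layout_scrollFlags="scroll|snap"; a sketch of doing the same from code, assuming a Toolbar that is a direct child of the AppBarLayout:


    AppBarLayout.LayoutParams params =
            (AppBarLayout.LayoutParams) toolbar.getLayoutParams();
    params.setScrollFlags(AppBarLayout.LayoutParams.SCROLL_FLAG_SCROLL
            | AppBarLayout.LayoutParams.SCROLL_FLAG_SNAP);
    toolbar.setLayoutParams(params);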


    NavigationView provides a convenient way to build a navigation drawer, including the ability to create menu items using a menu XML file. We’ve expanded what’s possible with the ability to set custom views for items via app:actionLayout or MenuItemCompat.setActionView().

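    For instance, attaching a SwitchCompat to a drawer item from code might look like this (the view and menu item ids are illustrative):


    NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
    MenuItem menuItem = navigationView.getMenu().findItem(R.id.nav_notifications);
    MenuItemCompat.setActionView(menuItem, new SwitchCompat(this));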

    Percent



    The Percent Support Library provides percentage-based dimensions and margins and, new to this release, the ability to set a custom aspect ratio via app:aspectRatio. By setting only a single width or height and using aspectRatio, the PercentFrameLayout or PercentRelativeLayout will automatically adjust the other dimension so that the layout uses the given aspect ratio.


    Custom Tabs



    The Custom Tabs Support Library allows your app to use the full features of compatible browsers, including pre-existing cookies, while still maintaining a fast load time (via prefetching) and a custom look and actions.


    In this release, we’re adding a few additional customizations, such as hiding the URL bar when the page is scrolled down, with the new enableUrlBarHiding() method. You’ll also be able to update the action button in an already launched custom tab via your CustomTabsSession with setActionButton() - perfect for providing a visual indication of a state change.

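    Launching a custom tab with the URL bar hidden on scroll might look like this from an Activity (the color resource and URL are placeholders):


    CustomTabsIntent customTabsIntent = new CustomTabsIntent.Builder()
            .enableUrlBarHiding()
            .setToolbarColor(ContextCompat.getColor(this, R.color.primary))
            .build();
    customTabsIntent.launchUrl(this, Uri.parse("https://example.com"));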

    Navigation events via CustomTabsCallback#onNavigationEvent() have also been expanded to include the new TAB_SHOWN and TAB_HIDDEN events, giving your app more information on how the user interacts with your web content.


    Leanback




    The Leanback library makes it easy to build user interfaces on TV devices. This release adds GuidedStepSupportFragment, a support version of GuidedStepFragment, improves animations and transitions, and allows GuidedStepFragment to be placed on top of existing content.


    You’ll also be able to annotate different types of search completions in SearchFragment, and VerticalGridFragment gains staggered slide transition support.


    Palette



    Palette, used to extract colors from images, now supports extracting from a specific region of a Bitmap with the new setRegion() method.

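    A sketch of limiting extraction to the top quarter of a bitmap (the region and fallback color are illustrative):


    Palette.from(bitmap)
            .setRegion(0, 0, bitmap.getWidth(), bitmap.getHeight() / 4)
            .generate(new Palette.PaletteAsyncListener() {
                @Override
                public void onGenerated(Palette palette) {
                    int accent = palette.getVibrantColor(Color.GRAY);
                    // Apply the extracted color to your UI.
                }
            });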

    SDK available now!



    There’s no better time to get started with the Android Support Library. You can begin developing today by updating the Android Support Repository from the Android SDK Manager.


    For an in-depth look at every API change in this release, check out the full API diff.


    To learn more about the Android Support Library and the APIs available to you through it, visit the Support Library section on the Android Developer site.

