Despite the existence of many distributed file systems (Riak CS, MongoDB GridFS, Cassandra File System, and others), many developers still prefer the Amazon S3 service. That is not surprising, given its flexibility, reasonable cost, the absence of anything to administer, and a powerful SDK. In this article we will show how to work with Amazon S3 from the Scala programming language using this SDK.

Before proceeding directly to the programming, let's create a small test environment through the AWS console. Find the S3 service there, open it, and click «Create Bucket». You will be asked to select the region that will host the bucket and to enter the bucket's name. Bucket names in S3 are global, so do not be surprised if the name «test-bucket» is already taken by another user 🙂 We will also soon need the ID of the selected region. Open the bucket, click Properties (on the right), locate Static Website Hosting, and in it the endpoint:

The region name is part of the endpoint and is underlined in red in the screenshot. My region is called «us-west-1»; yours may be the same or something else, depending on the region you selected when creating the bucket. The same information can be obtained even more easily by looking closely at the URL in your browser:
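If you ever need to derive that URL prefix in code rather than copy it by hand, it is just a fixed pattern around the region name. A minimal sketch (the `S3Regions` object and its `urlPrefix` helper are ours, not part of the AWS SDK):

```scala
// Hypothetical helper: builds the path-style S3 URL prefix
// from a region name, matching the endpoint format shown above.
object S3Regions {
  def urlPrefix(region: String): String =
    s"https://s3-$region.amazonaws.com"
}
```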

https://console.aws.amazon.com/s3/home?region=us-west-1

To work with S3 from Scala we need a user with appropriate privileges. In the AWS console, find the IAM service and create a new user. Once the user is created, you will see something like this:

Save the Access Key ID and Secret Access Key somewhere; we will need them soon.

In the user's properties, locate Managed Policies and click Attach Policy. In the list, find the policy named AmazonS3FullAccess, check the box next to it, and click Attach Policy. So, we now have a bucket in S3 and a user with full access to the S3 service. Time to write some code:

package me.eax.s3_example

import com.amazonaws._
import com.amazonaws.auth._
import com.amazonaws.services.s3._
import com.amazonaws.services.s3.model._
import java.io._

object AmazonS3Example extends App {
  val accessKey = "key"
  val secretKey = "key"
  val bucketName = "eaxme-test"
  // this is where the region name comes in:
  val urlPrefix = "https://s3-us-west-1.amazonaws.com"

  val credentials = new BasicAWSCredentials(accessKey, secretKey)
  val client = new AmazonS3Client(credentials)

  def uploadToS3(fileName: String, uploadPath: String): String = {
    client.putObject(bucketName, uploadPath, new File(fileName))

    // make the uploaded object readable by everyone
    val acl = client.getObjectAcl(bucketName, uploadPath)
    acl.grantPermission(GroupGrantee.AllUsers, Permission.Read)
    client.setObjectAcl(bucketName, uploadPath, acl)

    s"$urlPrefix/$bucketName/$uploadPath"
  }

  def fileIsUploadedToS3(uploadPath: String): Boolean = {
    try {
      client.getObjectMetadata(bucketName, uploadPath)
      true
    } catch {
      case e: AmazonServiceException if e.getStatusCode == 404 =>
        false
    }
  }

  def downloadFromS3(uploadPath: String, downloadPath: String): Unit = {
    if (!fileIsUploadedToS3(uploadPath)) {
      throw new RuntimeException(s"File $uploadPath is not uploaded!")
    }
    client.getObject(new GetObjectRequest(bucketName, uploadPath),
                     new File(downloadPath))
  }

  if (args.length < 3) {
    println("Usage: prog.jar file.dat s3/upload/path.dat " +
            "local/download/path.dat")
  } else {
    val Array(fileName, uploadPath, downloadPath, _*) = args
    println(s"Uploading $fileName...")

    val url = uploadToS3(fileName, uploadPath)
    println(s"Uploaded: $url")

    downloadFromS3(uploadPath, downloadPath)
    println(s"Downloaded: $downloadPath")
  }
}
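To compile the listing above, the AWS SDK for Java must be on the classpath. In sbt that is a single dependency line; the version below is an assumption, check for the current 1.x release:

```scala
// build.sbt (version number is an example, not a recommendation)
libraryDependencies += "com.amazonaws" % "aws-java-sdk-s3" % "1.11.0"
```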

This program uploads the specified file to S3 and then downloads it back. As you can see, it is pretty simple: we create a client, and the client has getObject and putObject methods. By default, files uploaded to S3 are not accessible from outside via a direct link. That is why in this example we explicitly set the file's permissions using the setObjectAcl method. Program output:

Uploading /home/eax/temp/doge.jpg…
Uploaded: https://s3-us-west-1.amazonaws.com/eaxme-test/path/to/doge.jpg
Downloaded: /tmp/downloaded-doge.jpg

You can verify that the file can be downloaded via the direct link, that according to the AWS console it really is in the bucket with the correct permissions, and that the downloaded file is identical to the uploaded one. That's all; no special magic is needed to work with Amazon S3.
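The last of those checks (downloaded file identical to the uploaded one) is easy to automate with nothing but the standard library. A small sketch; the `FileCompare` object and its helpers are ours, not part of the AWS SDK:

```scala
import java.nio.file.{Files, Paths}
import java.security.MessageDigest

object FileCompare {
  // Hex-encoded MD5 digest of a file's contents.
  def md5(path: String): String = {
    val bytes = Files.readAllBytes(Paths.get(path))
    MessageDigest.getInstance("MD5").digest(bytes)
      .map("%02x".format(_)).mkString
  }

  // True when both files have byte-for-byte identical contents.
  def sameContents(a: String, b: String): Boolean =
    md5(a) == md5(b)
}
```

Comparing digests rather than raw bytes also lets you log a short fingerprint of each file when the comparison fails.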

