
Saturday, September 23, 2017

googleAnalyticsR: A new R package for the Analytics Reporting API V4



Hello, I'm Mark Edmondson and I have the honour of being a Google Developer Expert for Google Analytics, a role that looks to help developers get the most out of Google Analytics. My specialities include Google APIs and data programming, which has prompted the creation of googleAnalyticsR, a new R package to interact with the recently released Google Analytics Reporting API V4.

R is increasingly popular with web analysts due to its powerful data processing, statistics and visualisation capabilities. A large part of R's strength in data analysis comes from its ever increasing range of open source packages. googleAnalyticsR allows you to download your Google Analytics data straight into an R session, which you could then use with other R packages to create insight and action from your data.

As well as v3 API capabilities, googleAnalyticsR also includes features unique to v4:
  •  On the fly calculated metrics 
  • Pivot reports 
  • Histogram data 
  • Multiple and more advanced segments 
  • Multi-date requests 
  • Cohorts 
  • Batched reports 
The library will also take advantage of any new aspects of the V4 API as it develops.

Getting started

To start using googleAnalyticsR, make sure you have the latest versions of R and (optionally) the R IDE, RStudio.

Start up RStudio, and install the package via:

install.packages("googleAnalyticsR")

This will install the package on your computer plus any dependencies.

After successful installation, you can load the library via library(googleAnalyticsR), and read the documentation within R via ?googleAnalyticsR, or on the package website.

An example API call: calculated metrics

Once installed, you can get your Google Analytics data similarly to the example below, which fetches an on-the-fly calculated metric:

library(googleAnalyticsR)

# authenticate with your Google Analytics login
ga_auth()

# call google analytics v4
ga4 <- google_analytics_4(viewId = 123456,
                          date_range = c("2016-01-01",
                                         "2016-06-01"),
                          metrics = c(calc1 = "ga:sessions / ga:users"),
                          dimensions = "medium")


See more examples on the v4 help page.

Segment Builder RStudio Addin

One of the powerful new features of the v4 API is enhanced segmentation; however, segments can be complicated to configure. To help with this, an RStudio Addin has been added which gives you a UI within RStudio to configure the segment object. To use it, install the library in RStudio, then select the segment builder from the Addins menu.

Create your own Google Analytics Dashboards

googleAnalyticsR has been built to be compatible with Shiny, a web application framework for R. It includes functions to make creating Google Analytics dashboards as easy as possible, along with login functions for your end users.

Example code for you to create your own Shiny dashboards is on the website.

BigQuery Google Analytics 360 exports 

In addition to the v4 and v3 API functions, BigQuery exports from Google Analytics 360 can also be directly queried, letting you download millions of rows of unsampled data.

Aimed at analysts familiar with Google Analytics but not SQL, it creates the SQL for you to query common metrics and dimensions, using an interface similar to the API calls. See the BigQuery section on the website for more details.

Anti-sampling 

To more easily fetch non-sampled data, googleAnalyticsR also features an anti-sampling flag which splits the API calls into self-adjusting time windows that are under the session sampling limit.  The approach used is described in more detail here.

Get involved 

If you have any suggestions, bug reports or ideas you would like to contribute, you are very welcome to raise an issue or submit a pull request at the googleAnalyticsR GitHub repository, or ping me on Twitter at @HoloMarkeD.

Posted by Mark Edmondson, Google Developer Expert



Tuesday, September 19, 2017

URLs U R Loaded with Information



In my early days of forensics, I considered URLs in web histories as nothing more than addresses to websites, and strictly speaking, that's true. But URLs often contain form information supplied by the user and other artifacts that can be relevant to an investigation, too. Most of us in the business know this already, at least as it concerns one commonly sought-after ingot: the web search term.

Consider the following URL:

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing

Most examiners would key in on the domain google.com and the end of the url, q=linuxsleuthing, and conclude this was a Google search for the term "linuxsleuthing", and they’d be right. But is there anything else to be gleaned from the URL? Just what do all those strings and punctuation mean, anyway?

What’s in a URL

Let’s use the URL above as our discussion focus. I’ll break down each element, and I’ll mention at least one value of the element to the forensic investigator (you may find others). Finally, I’ll identify and demonstrate a Python library to quickly dissect a URL into its constituent parts.

Protocol

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing

The URL starts with the protocol, the "language" the browser must speak to communicate with the resource. In the Python urllib module that I will introduce later, the protocol is referred to as the "scheme".

Examples:

  • http: - Internet surfing

  • https: - Secure Internet surfing

  • ftp: - File transfer operations

  • file: - Local file operations

  • mailto: - Email operations

The forensics value of a protocol is that it clues you into the nature of the activity occurring at that moment with the web browser.
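
Jumping ahead to the urllib module for a moment, here is a tiny sketch (using made-up example resources, not ones from a real history) showing how Python reports the scheme for each of these protocols:

from urllib.parse import urlparse

# made-up URLs, one per protocol type, purely for illustration
for u in ("http://example.com/",
          "https://example.com/login",
          "ftp://files.example.com/readme.txt",
          "file:///home/user/report.pdf",
          "mailto:analyst@example.com"):
    print(urlparse(u).scheme)

# prints: http, https, ftp, file, mailto (one per line)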

Domain

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing

The domain can be thought of as the place "where the resource lives." Technically, it can consist of three parts: the top-level domain (TLD), second-level domain, and the host name (or subdomain). If you are more interested in those terms, I’ll leave it to you to research. Suffice it to say that we think of it as the "name" of the website, and with good reason. The names exist in this form because they can be easily memorized and recognized by humans. You may also encounter the domain's evil twin in a URL, the Internet Protocol (IP) address, which domain names represent.

The Python urllib module refers to the domain as the "netloc" and identifies it by the leading "//", which is the proper introduction according to RFC 1808.

The forensic value of a domain is that you know where the resource defined in the remainder of the URL can be found or was located in the past.

Port

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing

The port is not listed in this URL, nor is it often included in URLs intended for human consumption. However, if you see something like www.google.com:80, the ":80" indicates communication is occurring across port 80. You'll often see port numbers in URLs for video servers, but port numbers are by no means limited to such uses. The Python urllib module incorporates the port in the "netloc" attribute.

The chief forensic value of a port is that it can clue you into the type of activity occurring on the domain because many port numbers are well known and commonly used for specific tasks.
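
As a quick illustration (a made-up URL, not one taken from a real history), the port can be pulled out of the parse result alongside the host:

from urllib.parse import urlparse

# hypothetical URL, used only to show where a port ends up
parts = urlparse("http://media.example.com:8080/videos/clip.mp4")
print(parts.netloc)    # media.example.com:8080 -- host and port together
print(parts.hostname)  # media.example.com
print(parts.port)      # 8080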

Path

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing

In terms of a web server, the path indicates the path to the resource on the server. If the "file:" protocol is seen in the URL, then the path signifies the logical location of the file on the local machine. In fact, there will not be a domain, though the domain preamble is present, which is why you see three forward slashes for a file:

file:///path.

The Python urllib module also uses the name "path" to describe this hierarchical path on the server. Please understand that both hard paths and relative paths are possible. In addition, Python describes "params" for the last path element, which are introduced by a semicolon. This should not be confused with the parameters I describe in the next section.
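
To see how Python separates those path "params" from the path itself, here is a small sketch with an invented URL:

from urllib.parse import urlparse

# invented URL: ";rev=2" rides on the last path segment, so it lands in params
parts = urlparse("https://docs.example.com/manual/page;rev=2?lang=en")
print(parts.path)    # /manual/page
print(parts.params)  # rev=2
print(parts.query)   # lang=en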

The principal forensic value of the path is the same as the overriding principle of real estate: location, location, location.

Parameters

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing

Parameters are information passed to the web server by the browser. They are also referred to as "query strings". Parameter strings are indicated by a leading "?" followed by key=value pairs. Multiple parameters are separated by "&". Python calls parameters the "query."

Consider our sample URL. It can be seen to have four parameters:

  • sourceid=chrome-instant

  • ion=1

  • espv=2

  • ie=UTF-8

Parameters are really the meat and potatoes of URL analysis, in my opinion. It is here I find the most interesting details: the user name entered on the previous web page; in the case of mobile devices, the location of the device (lat/lon) when the Facebook post was made; the query on the search engine, etc.

Despite what I said in the preceding paragraph, note that the search term is not found in the query string of our sample URL. The search was conducted through the Google Chrome browser address bar (sourceid=chrome-instant), so the term ended up in the anchor instead. Thus, it is not safe to assume that all search engine search terms or web form data are to be found in the URL parameters.

To throw a little more mud on the matter, consider that the entry point of the search and the browser make a difference in the URL:

Search for linuxsleuthing from the Ubuntu start page, Firefox
https://www.google.com/search?q=linuxsleuthing&ie=UTF-8&sa=Search&channel=fe&client=browser-ubuntu&hl=en&gws_rd=ssl

Here, we see the same search, but different parameters:

  • q=linuxsleuthing

  • ie=UTF-8

  • sa=Search

  • channel=fe

  • client=browser-ubuntu

  • hl=en

  • gws_rd=ssl
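
If you want those key/value pairs in a ready-made structure, urllib.parse.parse_qs will split a query string into a dictionary of lists; here is a minimal sketch using the Firefox query string above:

from urllib.parse import urlparse, parse_qs

url = ("https://www.google.com/search?q=linuxsleuthing&ie=UTF-8&sa=Search"
       "&channel=fe&client=browser-ubuntu&hl=en&gws_rd=ssl")
params = parse_qs(urlparse(url).query)
print(params["q"])       # ['linuxsleuthing']
print(params["client"])  # ['browser-ubuntu']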

Caution
Parameters will mean different things to different sites. There is no "one definition fits all" here, even if there is obvious commonality. It will take research and testing to know the particular meaning of any given parameter, even though it may appear obvious on its face.

Anchor

https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing

The anchor links to some location within the web page document itself. If you’ve ever clicked a link and found yourself halfway down a page, then you understand the purpose of the anchor. Somewhere in the html code of that page is a bookmark of sorts to which that anchor points. Python calls the anchor a "fragment."

In the case of our sample URL, the anchor is the search term I entered in the address bar of the Google Chrome browser.

The forensics value of an anchor is that you know what the user saw, or should have seen, when at that site. It might demonstrate a user's interest, or that they had knowledge of a fact, depending on your particular circumstances, of course.

Making Short Work of URL Parsing

Python includes a library for manipulating URLs named, appropriately enough, urllib. The Python library identifies the components of a URL a little more precisely than I described above, which was only intended as an introduction. By way of quick demonstration, we'll let Python address our sample URL.

iPython Interactive Session, Demonstrating urllib
In [1]: import urllib.parse

In [2]: result = urllib.parse.urlparse("https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing")

In [3]: print(result)
ParseResult(scheme='https', netloc='www.google.com', path='/webhp', params='', query='sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8', fragment='q=linuxsleuthing')

In [4]: result.query
Out[4]: 'sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8'

In [5]: result.query.split('&')
Out[5]: ['sourceid=chrome-instant', 'ion=1', 'espv=2', 'ie=UTF-8']

In [6]: result.fragment
Out[6]: 'q=linuxsleuthing'
Note
The Python urllib module calls the parameters I discussed a "query" and the anchor a "fragment."

If you have a little Python knowledge, you can see how readily you could parse a large list of URLs; a short sketch follows. If not, it is not much more difficult to parse a URL using BASH variable substitution.
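
A minimal sketch of that idea, assuming the URLs sit one per line in a plain text file (the file name urls.txt is only an example):

from urllib.parse import urlparse, parse_qs

# walk a list of URLs and print the pieces of forensic interest
with open("urls.txt") as f:
    for line in f:
        url = line.strip()
        if not url:
            continue
        parts = urlparse(url)
        print(parts.netloc, parse_qs(parts.query), parts.fragment)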

Parsing URLs using BASH variable substitution

$ url="https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=linuxsleuthing"
$ anchor=${url##*#}
$ parameters=${url##*?}
$ parameters=${parameters//#$anchor/}
$ echo ${parameters//&/ }
sourceid=chrome-instant ion=1 espv=2 ie=UTF-8
$ echo $anchor
q=linuxsleuthing

Finding Parameters

If you want to narrow your search to URLs containing parameters and anchors, you need only grep your list for the "?" or "#" characters. If you are processing a history database such as the Google Chrome History SQLite database, you can export the relevant URLs with the following query:

SQLite query for Google Chrome History
select * from urls where url like "%?%" or url like "%#%";
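
If you would rather do the export and the parsing in one pass, the same query can be run from Python with the standard sqlite3 module. This is a sketch only; work on a copy of the History database, and the file name used here is just an example:

import sqlite3
from urllib.parse import urlparse, parse_qs

# open a copy of the Chrome History database -- never the live file
conn = sqlite3.connect("History")
rows = conn.execute(
    "select url from urls where url like '%?%' or url like '%#%'")

for (url,) in rows:
    parts = urlparse(url)
    print(parts.netloc, parts.path, parse_qs(parts.query), parts.fragment)

conn.close()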

What’s All the Fuss?

So, why go to all this length to study a URL? I’ll give two simple illustrations:

In the first case, I had the computer of a person suspected of drug dealing. I found little relevant data on his computer doing basic analysis, including an analysis of search engine search terms. When I examined URL parameters, however, I found searches at website vendors that demonstrated the purchase of materials for growing marijuana.

In the second case, a stolen computer was recovered in close proximity to a suspect who claimed to have no knowledge of the device. The Google Chrome browser in the guest account had been used since the date of the theft, so analysis was in order. URL parameters showed a login to the suspect's Apple account 12 hours after the theft. There was no useful data in the cache, only the URL history.

Finally, bear in mind that the URL history may be the only artifact you have of secure website activity. Browsers, by default, do not cache secure elements. Understanding the contents of a URL can clue you into activity for which you may find no other artifacts.

It is good to know what’s in a URL!




Friday, September 8, 2017

ETS2 Mod: Mega Mod for Scania R v6.5 for v1.28.x by Bogdan Kasalap



Click on the images to enlarge
File size: 270.7 MB
Tested and approved
Version: 1.28 (test on other versions)
Replaces truck: Scania
Game: Euro Truck Simulator 2

Mod edit: Bogdan Kasalap
Adaptation for 1.27: Phantom94, Vovangt4 and LH Trucker
Images: Bogdan Kasalap

Note: extract with the program 7-Zip!

ShareMods


Source: Blog Euro Truck 2



Sunday, September 3, 2017

Company offers more than R$1.5 million to anyone who can hack WhatsApp





The privacy of your WhatsApp messages is worth a lot. So much so that Zerodium, a company that specialises in buying serious flaws in popular systems and applications, yesterday announced a prize of up to US$500,000 (R$1.57 million) for anyone who manages to hack the app and access other people's messages without their knowledge.
More specifically, what Zerodium is looking for are "zero-day" flaws (those requiring the most urgent fixes) in the app. These flaws must allow remote code execution and local privilege escalation. Besides WhatsApp, other messaging apps are included in the prize list, such as Telegram, iMessage, Signal and Messenger.

The effect of flaws like the ones the company is looking for, according to Mashable, would be that an outside party could access a victim's messages without her knowing. The announcement was made by the company via a tweet.

Your messages on the market

Normally, bug bounties are good news. Companies such as Google, Apple and Microsoft offer rewards to researchers who discover flaws in their systems, encouraging them to disclose the holes and thereby help make those systems safer for users. Zerodium, however, is a private company that intends to buy these flaws and sell them to whoever is willing to pay for the knowledge needed to hack the apps.

On its website, the company states that its clients include "giant defence, technology and finance corporations in need of advanced zero-day protection, as well as government agencies in need of specific security capabilities". In other words, only corporations and governments that pay Zerodium get access to the flaws the company discovers.

Beyond the messaging apps, the company also has many other systems in its sights. It has already paid more than US$1 million (R$3.13 million) to hackers who managed to remotely unlock an iPhone running iOS 9. Apple's operating system remains among those the company is most eager to hack, and anyone who finds similar flaws can report them to Zerodium to earn up to US$1.5 million (R$4.7 million).

What can you do?

As Mashable itself points out, the fact that WhatsApp flaws have made it onto Zerodium's "menu" may be a good sign. It probably indicates that the company does not yet know of holes in these apps and that, for now, they are secure.

On the other hand, the existence of these prizes will most likely drive hackers to work twice as hard to find ways to break into the apps. The best way to protect yourself, in that case, is to keep your apps and operating systems as up to date as possible.

One person who saw this situation coming was Telegram's creator, Pavel Durov. In June, he said he had been pressured by the FBI to create a "backdoor" into his own app, and that US government agents had tried to bribe the app's developers on two occasions. He even claimed he would bet US$1 million that the security protocol used by WhatsApp would be hacked within the next five years.

Source: Olhar Digital



Friday, September 1, 2017

Download NOKIA 205 RM-862 Latest Version V4.71 Flash Files



This is the latest version of the flash files for the Nokia 205, version 4.71: the basic pack of three files (MCU, PPM and CNT) used by any flashing tool for Nokia mobiles.
If you want to flash a Nokia 205 with the latest flash files, download and enjoy. You can flash these files with Infinity Nokia Best or ATF (Advance Turbo Flasher).
The product code of the files is 059W3F9.
We have also added the Nokia 205 pack info file, so you don't need to select the files one by one: just select RM-862 and the files will be added automatically.



The Nokia 205 is a simple phone with a lot of features, and Nokia keeps releasing updates for it; this is the latest update from Nokia.
In these files the PPM is the Indian-language version, so if you want the latest Nokia 205 Hindi flash files, this pack is for you.
So download and enjoy the latest version for the Nokia 205.

Download from Mega



Tuesday, March 31, 2015

A.R.E.S. Extinction Agenda EX Full Crack

Download A.R.E.S Extinction Agenda EX

Download A.R.E.S Extinction Agenda EX Full ISO - The game takes players on a thrilling sci-fi adventure! Take control of combat specialist Ares, or the new playable character, Tarus, to battle deadly machines with a variety of powerful weapons and armor. A.R.E.S Extinction Agenda EX Free Download, A.R.E.S Extinction Agenda EX Full Direct Download, A.R.E.S Extinction Agenda EX PC game mirror.

Recommended System:

OS: Windows XP, Vista, 7, 8, 8.1
Processor: Intel Dual Core (or higher)
Memory: 4 GB RAM
Graphics: Nvidia GeForce 560 Ti (or ATI equivalent) + DirectX
Network: Broadband Internet connection for online play
Hard Drive: 3 GB available space
Sound Card: Any compatible sound card
Game size: 531 MB

Download | A.R.E.S Extinction Agenda EX Full Version
PC Games A.R.E.S Extinction Agenda EX

Download A.R.E.S Extinction Agenda EX
Status | Tested & Played (Windows 7)

Download F.E.A.R. 2 PC torrent

F.E.A.R. 2 PC download torrent:

Download Magnet torrent

Download the translation (Portuguese)

Translation credits:

Game vicio


Description:

F.E.A.R. 2: Project Origin is a first-person shooter and psychological horror game developed by Monolith Productions and published by Warner Bros for Microsoft Windows, PlayStation 3 and Xbox 360. It is a sequel to F.E.A.R. and was released on February 10, 2009. It became available on Steam on February 12, 2009.

Information:

Developer: Monolith Productions
Publisher: Warner Bros. IE
Release date: February 10, 2009
Number of players: 1 player

Minimum Requirements:

* CPU: P4 2.8GHz (3.2GHz Vista)/Athlon 64 3000+ (3200+ Vista)
* GPU: Fully DX9-compliant graphics card with 256MB (SM 2.0b).
   NVidia 6800 or ATI X700.
* Memory: 1GB (1.5GB Vista)
* HDD: 12GB
* OS: Windows XP SP2/Vista SP1
* DirectX: 9.0c
* Sound: DX9.0c compliant
* Optical drive: DVD (boxed only)
* Internet: Broadband

Thursday, March 12, 2015

R-Type Command (USA)

Game name: R-Type Command
Language: English
Release date: May 6, 2008
Genre: Strategy
Video: YouTube link
Download: Google Drive
In a desperate war against the mysterious alien race known as the Bydo, humanity sends wave after wave of fighters into Bydo space -- none of which are ever heard from again. Mankind's main hope now resides with a lone commander, sent to lead a small armada on a perilous mission into the heart of the Bydo Empire. Low on fuel and forced to scavenge resources and equipment from his surroundings, the commander must use all his cunning and wits if he hopes to succeed, let alone make it home alive.