Tag Archives: Languages
- Assigning Objects
We can create new data objects and variables quite easily within R. We use the = or the <- operator to assign an object to its name. For the purposes of this article we will use = to assign object names and objects. This is very useful when we are doing data manipulation, as we can reuse the manipulated data as input for other steps in our analysis.
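A minimal sketch of the assignment operators (the variable names and values here are arbitrary examples):

```r
x = 5          # assign 5 to x using =
y <- 10        # assign 10 to y using <-
x + y -> z     # the rightward arrow assigns the result to z
print(z)       # prints 15
```

Any of the three forms stores the object under the given name so it can be reused later in the analysis.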
Types of Data Objects in R
A vector is simply a collection of data values. We create a vector using the c() function. (Note that what R formally calls a list, created with list(), is a different structure that can hold elements of mixed types.)
The following code creates a vector named numlist from 6 numeric inputs.
The following code creates a vector named charlist from 6 character inputs.
The following code creates a vector named mixlist from both numeric and character data; the numeric values are coerced to character.
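A sketch of the three calls described above (the input values are arbitrary examples):

```r
numlist  <- c(1, 2, 3, 4, 5, 6)              # 6 numeric inputs
charlist <- c("a", "b", "c", "d", "e", "f")  # 6 character inputs
mixlist  <- c(1, 2, 3, "a", "b", "c")        # mixed inputs

class(numlist)   # "numeric"
class(charlist)  # "character"
class(mixlist)   # "character" - the numbers were coerced to character
```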
A matrix is a two-dimensional collection of data in rows and columns, unlike a vector, which is one-dimensional. We can create a matrix using the matrix() command, specifying the number of rows with the nrow parameter and the number of columns with the ncol parameter.
In the following code, we create a matrix named ajay. The data is input in 3 rows as specified, but it is entered column by column: first column, then second column, and so on.
[,1] [,2] [,3]
[1,] 1 4 12
[2,] 2 5 18
[3,] 3 6 24
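The output above can be reproduced with a call like the following (the input values are read off the printed matrix; column-wise filling is the default):

```r
ajay <- matrix(c(1, 2, 3, 4, 5, 6, 12, 18, 24), nrow = 3)
ajay
# the 9 values fill the first column (1,2,3), then the
# second (4,5,6), then the third (12,18,24)
```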
However, please note the effect of using the byrow=T (TRUE) option. In the following code we create a matrix named ajay, and the data is entered row by row: first row, then second row, and so on.
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
[3,] 12 18 24
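The row-wise output above corresponds to the same input data with byrow = TRUE:

```r
ajay <- matrix(c(1, 2, 3, 4, 5, 6, 12, 18, 24), nrow = 3, byrow = TRUE)
ajay
# now the values fill the first row (1,2,3), then the
# second (4,5,6), then the third (12,18,24)
```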
- Data Frames
A data frame is a list of variables of the same number of rows with unique row names. The column names are the names of the variables.
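A minimal sketch of creating a data frame (the column names and values are arbitrary examples):

```r
df <- data.frame(name = c("a", "b", "c"), score = c(10, 20, 30))
names(df)    # the column names are the variable names: "name" "score"
nrow(df)     # all variables share the same number of rows: 3
```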
Big Data for Big Brother. Now playing. At a computer near you. How to help water the tree of liberty using statistics?
or use SAS software
SAS/CIA from the last paragraph of
- I didn’t learn the SAS Macro Language enough. SAS Macros are cool, and fast. Ditto for arrays. Or ODS.
- Not keeping up with the changes in Version 9+. Especially the hash method. (Why name a technique after a recreational drug? Most unfair.)
- Not studying more statistics theory.
- Flunking SAS Certification Twice.
- Not making enough money because customers need a solution not a p value.
- There is no Proc common sense. There is no Proc Clean the Data.
- No Macros to automate the model. Here is dirty data. There is clean model. Wait till version 16.
- Not getting selected by SAS R & D. Not applying to SAS R & D.
- Google has better voice recognition for typing notes. No voice recognition in the SAS language to type syntax.
- Enhanced Editor and EG are both idiotic junk pushed by Marketing!
Inspired by true events at
The 3.0 Era for R starts today! Changes include better Big Data support.
Read the NEWS here
- install.packages() has a new argument quiet to reduce the amount of output shown.
- New functions cite() and citeNatbib() have been added, to allow generation of in-text citations from "bibentry" objects. A cite() function may be added to bibstyle() environments.
- merge() works in more cases where the data frames include matrices. (Wish of PR#14974.)
- sample.int() has some support for n >= 2^31: see its help for the limitations. A different algorithm is used for sample.int(n, size, replace = FALSE, prob = NULL) for n > 1e7 and size <= n/2. This is much faster and uses less memory, but does give different results.
- dir() gains a new optional argument no.. which allows "." and ".." to be excluded from listings.
- Profiling via Rprof() now optionally records information at the statement level, not just the function level.
- available.packages() gains a "license/restricts_use" filter which retains only packages for which installation can proceed solely based on packages which are guaranteed not to restrict use.
- File ‘share/licenses/licenses.db’ has some clarifications, especially as to which variants of ‘BSD’ and ‘MIT’ are intended and how to apply them to packages. The problematic licence ‘Artistic-1.0’ has been removed.
- The breaks argument of hist.default() can now be a function that returns the breakpoints to be used (previously it could only return the suggested number of breakpoints).
This section applies only to 64-bit platforms.
- There is support for vectors longer than 2^31 – 1 elements. This applies to raw, logical, integer, double, complex and character vectors, as well as lists. (Elements of character vectors remain limited to 2^31 – 1 bytes.)
- Most operations which can sensibly be done with long vectors work: others may return the error ‘long vectors not supported yet’. Most of these are because they explicitly work with integer indices (e.g. match()) or because other limits (e.g. of character strings or matrix dimensions) would be exceeded or the operations would be extremely slow.
- length() returns a double for long vectors, and lengths can be set to 2^31 or more by the replacement function with a double value.
- Most aspects of indexing are available. Generally double-valued indices can be used to access elements beyond 2^31 – 1.
- There is some support for matrices and arrays with each dimension less than 2^31 but total number of elements more than that. Only some aspects of matrix algebra work for such matrices, often taking a very long time. In other cases the underlying Fortran code has an unstated restriction (as was found for complex
- dist() can produce dissimilarity objects for more than 65536 rows (but for example hclust() cannot process such objects).
- serialize() to a raw vector is unlimited in size (except by resources).
- The C-level function R_alloc can now allocate 2^35 or more bytes.
- grep() will return double vectors of indices for long vector inputs.
- Many calls to .C() have been replaced by .Call() to allow long vectors to be supported (now or in the future). Regrettably several packages had copied the non-API .C() calls and so failed.
- Calls to .C() and .Fortran() do not accept long vector inputs. This is a precaution, as it is very unlikely that existing code will have been written to handle long vectors (and the R wrappers often assume that length(x) is an integer).
- Most of the methods for sort() work for long vectors.
- order() supports long vectors (slowly except for radix sorting).
- sample() can do uniform sampling from a long vector.
- More use has been made of R objects representing registered entry points, which is more efficient as the address is provided by the loader once only when the package is loaded.
This has been done for package tcltk: it was already in place for the other standard packages.
Since these entry points are always accessed by the R entry points, they do not need to be in the load table, which can be substantially smaller and hence searched faster. This does mean that .Call calls copied from earlier versions of R may no longer work - but they were never part of the API.
.Call()calls in package base have been migrated to
- solve() makes fewer copies, especially when b is a vector rather than a matrix.
- eigen() makes fewer copies if the input has dimnames.
- Most of the linear algebra functions make fewer copies when the input(s) are not double (e.g. integer or logical).
- A foreign function call (.C() etc.) in a package without a PACKAGE argument will only look in the first DLL specified in the ‘NAMESPACE’ file of the package rather than searching all loaded DLLs. A few packages needed
- The @<- operator is now implemented as a primitive, which should reduce some copying of objects when used. Note that the operator object must now be in package base: do not try to import it explicitly from package methods.
SIGNIFICANT USER-VISIBLE CHANGES
- Packages need to be (re-)installed under this version (3.0.0) of R.
- There is a subtle change in behaviour for numeric index values 2^31 and larger. These never used to be legitimate and so were treated as NA, sometimes with a warning. They are now legal for long vectors, so there is no longer a warning, and x[2^31] <- y will now extend the vector on a 64-bit platform and give an error on a 32-bit one.
- It is now possible for 64-bit builds to allocate amounts of memory limited only by the OS. It may be wise to use OS facilities (e.g. csh) to set limits on overall memory consumption of an R process, particularly in a multi-user environment. A number of packages need a limit of at least 4GB of virtual memory to load.
64-bit Windows builds of R are by default limited in memory usage to the amount of RAM installed: this limit can be changed by the command-line option --max-mem-size or by setting the environment variable R_MAX_MEM_SIZE.
So I finally got my test plan accepted for a 1-month trial of the Oracle Public Cloud at https://cloud.oracle.com/ .
Some initial thoughts: this Java cloud seems more suitable for web apps than for data science (but I have to spend much more time on this).
I really liked the help and documentation and tutorials, Oracle has invested a lot in it to make it friendly to enterprise users.
Hopefully the Oracle R Enterprise ORE guys can talk to the Oracle Cloud department and get some common use case projects going.
In the meantime, I did a roundup of all R-Java projects.
They include- (more…)
I have recently become a Quora addict, and you can see why it is such a great site. If possible say hello to me there at
My latest favorite question-
What are the most hilarious pie charts?
I am only showing you some of the answers, you can see the rest yourself.
The Google Visualization API is a great way for people to make dashboards with slick graphics based on data, without getting into the fine print of the scripting language itself. It utilizes the same tools as Google itself does, visualizing data through calls to the Visualization API. Thus a real-time, customizable dashboard that is publishable to the internet can be created within minutes, and more importantly, insights can be drawn much more easily from graphs than from rows of tables and numbers.
- There are 41 gadgets (made by both Google and third-party developers) available in the Gadget Gallery ( https://developers.google.com/chart/interactive/docs/gadgetgallery ).
- There are 12 kinds of charts available in the Chart Gallery ( https://developers.google.com/chart/interactive/docs/gallery ).
- However, there are 26 additional charts on the charts page at https://developers.google.com/chart/interactive/docs/more_charts .
Building and embedding charts is simplified to a few steps
- Load the AJAX API
- Load the Visualization API and the appropriate package (like piechart or barchart from the kinds of chart)
- Set a callback to run when the Google Visualization API is loaded
- Within the callback, create and populate a data table, instantiate the particular chart type chosen, pass in the data, and draw the chart.
- Create the data table with appropriately named columns and data rows.
- Set chart options with Title, Width and Height
- Instantiate and draw the chart, passing in some options including the name and id
- Finally write the HTML/ Div that will hold the chart
You can simply copy and paste the code directly from https://developers.google.com/chart/interactive/docs/quick_start without getting into any details, tweak it according to your data and chart preference, and voila - your web dashboard is ready!
That is the beauty of working with an API - you can create and display genius ideas without messing with the scripting languages and code (too much). If you would like to dive deeper into the API, you can look at the various objects at https://developers.google.com/chart/interactive/docs/reference
First launched in March 2008, the Google Visualization API has indeed come a long way in making dashboards easier to build for people wanting to utilize advanced data visualization. It came about directly as a result of Google’s 2007 acquisition of Gapminder’s Trendalyzer software (of Hans Rosling fame).
As computing invariably and inevitably shifts to the cloud, visualization APIs will be very useful. Tableau Software has been a pioneer in selling data visualization to the lucrative business intelligence and business dashboards community (you can see the Tableau Software API at http://onlinehelp.tableausoftware.com/v7.0/server/en-us/embed_api.htm ), and Google Visualization can do the same and capture the business dashboard and visualization market, if Google focuses more on integrating it across its multiple and often confusing API offerings.
However as of now, this is quite simply the easiest way to create a web dashboard for your personal needs. Google guarantees 3 years of backward compatibility with this API and it is completely free.