Week 2: Fixing the myMean Function and Understanding Flexibility in Function Inputs

Introduction

This week, my assignment involved evaluating and debugging a custom function called myMean, which calculates the mean (average) of a dataset. While the concept behind the function was simple, two issues with its inputs prevented it from working correctly. Here's a walkthrough of the problem, how I approached it, and the final solution.


The Problem

The original function I was asked to evaluate was defined as follows:
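Sketched roughly from the issues described below (the original parameter name isn't shown, so the one used here is a placeholder):

    # Reconstructed sketch of the broken function; "data" is a placeholder parameter name
    myMean <- function(data) {
      # Bug: neither "assignment" nor "someData" is the parameter passed into the function
      sum(assignment) / length(someData)
    }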

The dataset provided to test this function was:
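The exact values aren't reproduced here, so the vector below is a hypothetical stand-in; the name someData matches the variable referenced in the original code:

    # Hypothetical stand-in for the test dataset; the actual values aren't shown here
    someData <- c(4, 8, 15, 16, 23, 42)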

At first glance, the function seemed like it should calculate the sum of the vector and divide it by the number of elements. However, there were two key issues:

  1. Incorrect Input in sum(): The function used sum(assignment), referencing a variable that was never passed into the function.
  2. Incorrect Input in length(): The function referenced length(someData), which likewise was not defined inside the function.


The Solution

To fix this, I simply corrected the inputs to ensure the function would work with whatever data is passed in. I modified the function as follows:
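A minimal sketch of the corrected version, using a single parameter x for both sum() and length(), consistent with the description that follows:

    # Fixed: both sum() and length() now operate on the parameter x
    myMean <- function(x) {
      sum(x) / length(x)
    }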

This version is more flexible and reusable: by making x the function's only parameter and using it consistently in the body, myMean works with any dataset passed to it, without hard-coding a specific variable name.

Verifying the Result

To ensure my solution was correct, I compared the result from my myMean function with R's built-in mean() function:
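A minimal check, assuming the someData vector from above:

    # Compare the custom function with R's built-in mean()
    myMean(someData)
    mean(someData)

    # Both calls should return the same value; this makes the comparison explicit
    all.equal(myMean(someData), mean(someData))  # TRUE if the fix is correct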

Both functions produced the same result, confirming that my fix worked correctly.

Progress in Learning R

In addition to fixing the myMean function, I've also made progress in my R programming journey, especially with data manipulation using the dplyr package. This week, I learned and applied some important functions like:

  • mutate(): To create or transform variables.
  • select(): To choose specific columns.
  • filter(): To subset the rows based on conditions.
  • summarize(): To aggregate and summarize data.

Using these functions, I analyzed data for a research project I'm working on. These tools made it easier to transform and interpret my dataset efficiently.
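The research data itself isn't shown here, so the sketch below uses a small made-up data frame just to illustrate how these dplyr verbs chain together:

    library(dplyr)

    # Small made-up data frame standing in for the research dataset
    df <- data.frame(
      group = c("A", "A", "B", "B"),
      value = c(2.1, 3.5, 4.0, 5.2)
    )

    df %>%
      mutate(value_scaled = value / max(value)) %>%  # create/transform a variable
      select(group, value_scaled) %>%                # keep specific columns
      filter(value_scaled > 0.5) %>%                 # subset rows by a condition
      group_by(group) %>%
      summarize(mean_scaled = mean(value_scaled))    # aggregate per group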

You can find the code and details of this analysis on my GitHub repository. I’m also attaching this week’s notes for reference, which include the functions I learned and applied.

Conclusion

This assignment was a great exercise in understanding how R functions handle inputs and the importance of flexibility when defining parameters. By ensuring the function references the correct input, it can work with any dataset passed to it. Using R's built-in mean() function was a simple but effective way to verify that my custom function was performing as expected.


